Jan 26 15:29:14 localhost kernel: Linux version 5.14.0-661.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-69.el9) #1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026
Jan 26 15:29:14 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Jan 26 15:29:14 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 26 15:29:14 localhost kernel: BIOS-provided physical RAM map:
Jan 26 15:29:14 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 26 15:29:14 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 26 15:29:14 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 26 15:29:14 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Jan 26 15:29:14 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Jan 26 15:29:14 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 26 15:29:14 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 26 15:29:14 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Jan 26 15:29:14 localhost kernel: NX (Execute Disable) protection: active
Jan 26 15:29:14 localhost kernel: APIC: Static calls initialized
Jan 26 15:29:14 localhost kernel: SMBIOS 2.8 present.
Jan 26 15:29:14 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Jan 26 15:29:14 localhost kernel: Hypervisor detected: KVM
Jan 26 15:29:14 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 26 15:29:14 localhost kernel: kvm-clock: using sched offset of 5050204763 cycles
Jan 26 15:29:14 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 26 15:29:14 localhost kernel: tsc: Detected 2799.998 MHz processor
Jan 26 15:29:14 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 26 15:29:14 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 26 15:29:14 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Jan 26 15:29:14 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 26 15:29:14 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Jan 26 15:29:14 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Jan 26 15:29:14 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Jan 26 15:29:14 localhost kernel: Using GB pages for direct mapping
Jan 26 15:29:14 localhost kernel: RAMDISK: [mem 0x2d426000-0x32a0afff]
Jan 26 15:29:14 localhost kernel: ACPI: Early table checksum verification disabled
Jan 26 15:29:14 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Jan 26 15:29:14 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 26 15:29:14 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 26 15:29:14 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 26 15:29:14 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Jan 26 15:29:14 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 26 15:29:14 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 26 15:29:14 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Jan 26 15:29:14 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Jan 26 15:29:14 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Jan 26 15:29:14 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Jan 26 15:29:14 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Jan 26 15:29:14 localhost kernel: No NUMA configuration found
Jan 26 15:29:14 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Jan 26 15:29:14 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Jan 26 15:29:14 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Jan 26 15:29:14 localhost kernel: Zone ranges:
Jan 26 15:29:14 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Jan 26 15:29:14 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Jan 26 15:29:14 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Jan 26 15:29:14 localhost kernel:   Device   empty
Jan 26 15:29:14 localhost kernel: Movable zone start for each node
Jan 26 15:29:14 localhost kernel: Early memory node ranges
Jan 26 15:29:14 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Jan 26 15:29:14 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Jan 26 15:29:14 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Jan 26 15:29:14 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Jan 26 15:29:14 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 26 15:29:14 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 26 15:29:14 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Jan 26 15:29:14 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Jan 26 15:29:14 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 26 15:29:14 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 26 15:29:14 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 26 15:29:14 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 26 15:29:14 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 26 15:29:14 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 26 15:29:14 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 26 15:29:14 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 26 15:29:14 localhost kernel: TSC deadline timer available
Jan 26 15:29:14 localhost kernel: CPU topo: Max. logical packages:   8
Jan 26 15:29:14 localhost kernel: CPU topo: Max. logical dies:       8
Jan 26 15:29:14 localhost kernel: CPU topo: Max. dies per package:   1
Jan 26 15:29:14 localhost kernel: CPU topo: Max. threads per core:   1
Jan 26 15:29:14 localhost kernel: CPU topo: Num. cores per package:     1
Jan 26 15:29:14 localhost kernel: CPU topo: Num. threads per package:   1
Jan 26 15:29:14 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Jan 26 15:29:14 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 26 15:29:14 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Jan 26 15:29:14 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Jan 26 15:29:14 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Jan 26 15:29:14 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Jan 26 15:29:14 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Jan 26 15:29:14 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Jan 26 15:29:14 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Jan 26 15:29:14 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Jan 26 15:29:14 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Jan 26 15:29:14 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Jan 26 15:29:14 localhost kernel: Booting paravirtualized kernel on KVM
Jan 26 15:29:14 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 26 15:29:14 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Jan 26 15:29:14 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Jan 26 15:29:14 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Jan 26 15:29:14 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Jan 26 15:29:14 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 26 15:29:14 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 26 15:29:14 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64", will be passed to user space.
Jan 26 15:29:14 localhost kernel: random: crng init done
Jan 26 15:29:14 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 26 15:29:14 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 26 15:29:14 localhost kernel: Fallback order for Node 0: 0 
Jan 26 15:29:14 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Jan 26 15:29:14 localhost kernel: Policy zone: Normal
Jan 26 15:29:14 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 26 15:29:14 localhost kernel: software IO TLB: area num 8.
Jan 26 15:29:14 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Jan 26 15:29:14 localhost kernel: ftrace: allocating 49417 entries in 194 pages
Jan 26 15:29:14 localhost kernel: ftrace: allocated 194 pages with 3 groups
Jan 26 15:29:14 localhost kernel: Dynamic Preempt: voluntary
Jan 26 15:29:14 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 26 15:29:14 localhost kernel: rcu:         RCU event tracing is enabled.
Jan 26 15:29:14 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Jan 26 15:29:14 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Jan 26 15:29:14 localhost kernel:         Rude variant of Tasks RCU enabled.
Jan 26 15:29:14 localhost kernel:         Tracing variant of Tasks RCU enabled.
Jan 26 15:29:14 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 26 15:29:14 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Jan 26 15:29:14 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 26 15:29:14 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 26 15:29:14 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 26 15:29:14 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Jan 26 15:29:14 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 26 15:29:14 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Jan 26 15:29:14 localhost kernel: Console: colour VGA+ 80x25
Jan 26 15:29:14 localhost kernel: printk: console [ttyS0] enabled
Jan 26 15:29:14 localhost kernel: ACPI: Core revision 20230331
Jan 26 15:29:14 localhost kernel: APIC: Switch to symmetric I/O mode setup
Jan 26 15:29:14 localhost kernel: x2apic enabled
Jan 26 15:29:14 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Jan 26 15:29:14 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 26 15:29:14 localhost kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Jan 26 15:29:14 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 26 15:29:14 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 26 15:29:14 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 26 15:29:14 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 26 15:29:14 localhost kernel: Spectre V2 : Mitigation: Retpolines
Jan 26 15:29:14 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 26 15:29:14 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 26 15:29:14 localhost kernel: RETBleed: Mitigation: untrained return thunk
Jan 26 15:29:14 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 26 15:29:14 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 26 15:29:14 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 26 15:29:14 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 26 15:29:14 localhost kernel: x86/bugs: return thunk changed
Jan 26 15:29:14 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 26 15:29:14 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 26 15:29:14 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 26 15:29:14 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 26 15:29:14 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Jan 26 15:29:14 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 26 15:29:14 localhost kernel: Freeing SMP alternatives memory: 40K
Jan 26 15:29:14 localhost kernel: pid_max: default: 32768 minimum: 301
Jan 26 15:29:14 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Jan 26 15:29:14 localhost kernel: landlock: Up and running.
Jan 26 15:29:14 localhost kernel: Yama: becoming mindful.
Jan 26 15:29:14 localhost kernel: SELinux:  Initializing.
Jan 26 15:29:14 localhost kernel: LSM support for eBPF active
Jan 26 15:29:14 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 26 15:29:14 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 26 15:29:14 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 26 15:29:14 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 26 15:29:14 localhost kernel: ... version:                0
Jan 26 15:29:14 localhost kernel: ... bit width:              48
Jan 26 15:29:14 localhost kernel: ... generic registers:      6
Jan 26 15:29:14 localhost kernel: ... value mask:             0000ffffffffffff
Jan 26 15:29:14 localhost kernel: ... max period:             00007fffffffffff
Jan 26 15:29:14 localhost kernel: ... fixed-purpose events:   0
Jan 26 15:29:14 localhost kernel: ... event mask:             000000000000003f
Jan 26 15:29:14 localhost kernel: signal: max sigframe size: 1776
Jan 26 15:29:14 localhost kernel: rcu: Hierarchical SRCU implementation.
Jan 26 15:29:14 localhost kernel: rcu:         Max phase no-delay instances is 400.
Jan 26 15:29:14 localhost kernel: smp: Bringing up secondary CPUs ...
Jan 26 15:29:14 localhost kernel: smpboot: x86: Booting SMP configuration:
Jan 26 15:29:14 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Jan 26 15:29:14 localhost kernel: smp: Brought up 1 node, 8 CPUs
Jan 26 15:29:14 localhost kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Jan 26 15:29:14 localhost kernel: node 0 deferred pages initialised in 22ms
Jan 26 15:29:14 localhost kernel: Memory: 7763888K/8388068K available (16384K kernel code, 5797K rwdata, 13916K rodata, 4200K init, 7192K bss, 618356K reserved, 0K cma-reserved)
Jan 26 15:29:14 localhost kernel: devtmpfs: initialized
Jan 26 15:29:14 localhost kernel: x86/mm: Memory block size: 128MB
Jan 26 15:29:14 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 26 15:29:14 localhost kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Jan 26 15:29:14 localhost kernel: pinctrl core: initialized pinctrl subsystem
Jan 26 15:29:14 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 26 15:29:14 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Jan 26 15:29:14 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 26 15:29:14 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 26 15:29:14 localhost kernel: audit: initializing netlink subsys (disabled)
Jan 26 15:29:14 localhost kernel: audit: type=2000 audit(1769441351.394:1): state=initialized audit_enabled=0 res=1
Jan 26 15:29:14 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Jan 26 15:29:14 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 26 15:29:14 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 26 15:29:14 localhost kernel: cpuidle: using governor menu
Jan 26 15:29:14 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 26 15:29:14 localhost kernel: PCI: Using configuration type 1 for base access
Jan 26 15:29:14 localhost kernel: PCI: Using configuration type 1 for extended access
Jan 26 15:29:14 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 26 15:29:14 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 26 15:29:14 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 26 15:29:14 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 26 15:29:14 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 26 15:29:14 localhost kernel: Demotion targets for Node 0: null
Jan 26 15:29:14 localhost kernel: cryptd: max_cpu_qlen set to 1000
Jan 26 15:29:14 localhost kernel: ACPI: Added _OSI(Module Device)
Jan 26 15:29:14 localhost kernel: ACPI: Added _OSI(Processor Device)
Jan 26 15:29:14 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 26 15:29:14 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 26 15:29:14 localhost kernel: ACPI: Interpreter enabled
Jan 26 15:29:14 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Jan 26 15:29:14 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Jan 26 15:29:14 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 26 15:29:14 localhost kernel: PCI: Using E820 reservations for host bridge windows
Jan 26 15:29:14 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 26 15:29:14 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 26 15:29:14 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Jan 26 15:29:14 localhost kernel: acpiphp: Slot [3] registered
Jan 26 15:29:14 localhost kernel: acpiphp: Slot [4] registered
Jan 26 15:29:14 localhost kernel: acpiphp: Slot [5] registered
Jan 26 15:29:14 localhost kernel: acpiphp: Slot [6] registered
Jan 26 15:29:14 localhost kernel: acpiphp: Slot [7] registered
Jan 26 15:29:14 localhost kernel: acpiphp: Slot [8] registered
Jan 26 15:29:14 localhost kernel: acpiphp: Slot [9] registered
Jan 26 15:29:14 localhost kernel: acpiphp: Slot [10] registered
Jan 26 15:29:14 localhost kernel: acpiphp: Slot [11] registered
Jan 26 15:29:14 localhost kernel: acpiphp: Slot [12] registered
Jan 26 15:29:14 localhost kernel: acpiphp: Slot [13] registered
Jan 26 15:29:14 localhost kernel: acpiphp: Slot [14] registered
Jan 26 15:29:14 localhost kernel: acpiphp: Slot [15] registered
Jan 26 15:29:14 localhost kernel: acpiphp: Slot [16] registered
Jan 26 15:29:14 localhost kernel: acpiphp: Slot [17] registered
Jan 26 15:29:14 localhost kernel: acpiphp: Slot [18] registered
Jan 26 15:29:14 localhost kernel: acpiphp: Slot [19] registered
Jan 26 15:29:14 localhost kernel: acpiphp: Slot [20] registered
Jan 26 15:29:14 localhost kernel: acpiphp: Slot [21] registered
Jan 26 15:29:14 localhost kernel: acpiphp: Slot [22] registered
Jan 26 15:29:14 localhost kernel: acpiphp: Slot [23] registered
Jan 26 15:29:14 localhost kernel: acpiphp: Slot [24] registered
Jan 26 15:29:14 localhost kernel: acpiphp: Slot [25] registered
Jan 26 15:29:14 localhost kernel: acpiphp: Slot [26] registered
Jan 26 15:29:14 localhost kernel: acpiphp: Slot [27] registered
Jan 26 15:29:14 localhost kernel: acpiphp: Slot [28] registered
Jan 26 15:29:14 localhost kernel: acpiphp: Slot [29] registered
Jan 26 15:29:14 localhost kernel: acpiphp: Slot [30] registered
Jan 26 15:29:14 localhost kernel: acpiphp: Slot [31] registered
Jan 26 15:29:14 localhost kernel: PCI host bridge to bus 0000:00
Jan 26 15:29:14 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Jan 26 15:29:14 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Jan 26 15:29:14 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 26 15:29:14 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 26 15:29:14 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Jan 26 15:29:14 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 26 15:29:14 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Jan 26 15:29:14 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Jan 26 15:29:14 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Jan 26 15:29:14 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Jan 26 15:29:14 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Jan 26 15:29:14 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Jan 26 15:29:14 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Jan 26 15:29:14 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Jan 26 15:29:14 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Jan 26 15:29:14 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Jan 26 15:29:14 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Jan 26 15:29:14 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Jan 26 15:29:14 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Jan 26 15:29:14 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Jan 26 15:29:14 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Jan 26 15:29:14 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Jan 26 15:29:14 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Jan 26 15:29:14 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Jan 26 15:29:14 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 26 15:29:14 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 26 15:29:14 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Jan 26 15:29:14 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Jan 26 15:29:14 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Jan 26 15:29:14 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Jan 26 15:29:14 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 26 15:29:14 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Jan 26 15:29:14 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Jan 26 15:29:14 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Jan 26 15:29:14 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Jan 26 15:29:14 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Jan 26 15:29:14 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jan 26 15:29:14 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 26 15:29:14 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Jan 26 15:29:14 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Jan 26 15:29:14 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 26 15:29:14 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 26 15:29:14 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 26 15:29:14 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 26 15:29:14 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 26 15:29:14 localhost kernel: iommu: Default domain type: Translated
Jan 26 15:29:14 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 26 15:29:14 localhost kernel: SCSI subsystem initialized
Jan 26 15:29:14 localhost kernel: ACPI: bus type USB registered
Jan 26 15:29:14 localhost kernel: usbcore: registered new interface driver usbfs
Jan 26 15:29:14 localhost kernel: usbcore: registered new interface driver hub
Jan 26 15:29:14 localhost kernel: usbcore: registered new device driver usb
Jan 26 15:29:14 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 26 15:29:14 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Jan 26 15:29:14 localhost kernel: PTP clock support registered
Jan 26 15:29:14 localhost kernel: EDAC MC: Ver: 3.0.0
Jan 26 15:29:14 localhost kernel: NetLabel: Initializing
Jan 26 15:29:14 localhost kernel: NetLabel:  domain hash size = 128
Jan 26 15:29:14 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Jan 26 15:29:14 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Jan 26 15:29:14 localhost kernel: PCI: Using ACPI for IRQ routing
Jan 26 15:29:14 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 26 15:29:14 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 26 15:29:14 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Jan 26 15:29:14 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 26 15:29:14 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 26 15:29:14 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 26 15:29:14 localhost kernel: vgaarb: loaded
Jan 26 15:29:14 localhost kernel: clocksource: Switched to clocksource kvm-clock
Jan 26 15:29:14 localhost kernel: VFS: Disk quotas dquot_6.6.0
Jan 26 15:29:14 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 26 15:29:14 localhost kernel: pnp: PnP ACPI init
Jan 26 15:29:14 localhost kernel: pnp 00:03: [dma 2]
Jan 26 15:29:14 localhost kernel: pnp: PnP ACPI: found 5 devices
Jan 26 15:29:14 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 26 15:29:14 localhost kernel: NET: Registered PF_INET protocol family
Jan 26 15:29:14 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 26 15:29:14 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 26 15:29:14 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 26 15:29:14 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 26 15:29:14 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Jan 26 15:29:14 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 26 15:29:14 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Jan 26 15:29:14 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 26 15:29:14 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 26 15:29:14 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 26 15:29:14 localhost kernel: NET: Registered PF_XDP protocol family
Jan 26 15:29:14 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Jan 26 15:29:14 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Jan 26 15:29:14 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 26 15:29:14 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Jan 26 15:29:14 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Jan 26 15:29:14 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 26 15:29:14 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 26 15:29:14 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 26 15:29:14 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 71805 usecs
Jan 26 15:29:14 localhost kernel: PCI: CLS 0 bytes, default 64
Jan 26 15:29:14 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 26 15:29:14 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Jan 26 15:29:14 localhost kernel: Trying to unpack rootfs image as initramfs...
Jan 26 15:29:14 localhost kernel: ACPI: bus type thunderbolt registered
Jan 26 15:29:14 localhost kernel: Initialise system trusted keyrings
Jan 26 15:29:14 localhost kernel: Key type blacklist registered
Jan 26 15:29:14 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Jan 26 15:29:14 localhost kernel: zbud: loaded
Jan 26 15:29:14 localhost kernel: integrity: Platform Keyring initialized
Jan 26 15:29:14 localhost kernel: integrity: Machine keyring initialized
Jan 26 15:29:14 localhost kernel: Freeing initrd memory: 87956K
Jan 26 15:29:14 localhost kernel: NET: Registered PF_ALG protocol family
Jan 26 15:29:14 localhost kernel: xor: automatically using best checksumming function   avx       
Jan 26 15:29:14 localhost kernel: Key type asymmetric registered
Jan 26 15:29:14 localhost kernel: Asymmetric key parser 'x509' registered
Jan 26 15:29:14 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Jan 26 15:29:14 localhost kernel: io scheduler mq-deadline registered
Jan 26 15:29:14 localhost kernel: io scheduler kyber registered
Jan 26 15:29:14 localhost kernel: io scheduler bfq registered
Jan 26 15:29:14 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Jan 26 15:29:14 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Jan 26 15:29:14 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Jan 26 15:29:14 localhost kernel: ACPI: button: Power Button [PWRF]
Jan 26 15:29:14 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 26 15:29:14 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 26 15:29:14 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 26 15:29:14 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 26 15:29:14 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 26 15:29:14 localhost kernel: Non-volatile memory driver v1.3
Jan 26 15:29:14 localhost kernel: rdac: device handler registered
Jan 26 15:29:14 localhost kernel: hp_sw: device handler registered
Jan 26 15:29:14 localhost kernel: emc: device handler registered
Jan 26 15:29:14 localhost kernel: alua: device handler registered
Jan 26 15:29:14 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jan 26 15:29:14 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jan 26 15:29:14 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jan 26 15:29:14 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Jan 26 15:29:14 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Jan 26 15:29:14 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jan 26 15:29:14 localhost kernel: usb usb1: Product: UHCI Host Controller
Jan 26 15:29:14 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-661.el9.x86_64 uhci_hcd
Jan 26 15:29:14 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Jan 26 15:29:14 localhost kernel: hub 1-0:1.0: USB hub found
Jan 26 15:29:14 localhost kernel: hub 1-0:1.0: 2 ports detected
Jan 26 15:29:14 localhost kernel: usbcore: registered new interface driver usbserial_generic
Jan 26 15:29:14 localhost kernel: usbserial: USB Serial support registered for generic
Jan 26 15:29:14 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 26 15:29:14 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 26 15:29:14 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 26 15:29:14 localhost kernel: mousedev: PS/2 mouse device common for all mice
Jan 26 15:29:14 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 26 15:29:14 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Jan 26 15:29:14 localhost kernel: rtc_cmos 00:04: registered as rtc0
Jan 26 15:29:14 localhost kernel: rtc_cmos 00:04: setting system clock to 2026-01-26T15:29:13 UTC (1769441353)
Jan 26 15:29:14 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 26 15:29:14 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Jan 26 15:29:14 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 26 15:29:14 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Jan 26 15:29:14 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 26 15:29:14 localhost kernel: usbcore: registered new interface driver usbhid
Jan 26 15:29:14 localhost kernel: usbhid: USB HID core driver
Jan 26 15:29:14 localhost kernel: drop_monitor: Initializing network drop monitor service
Jan 26 15:29:14 localhost kernel: Initializing XFRM netlink socket
Jan 26 15:29:14 localhost kernel: NET: Registered PF_INET6 protocol family
Jan 26 15:29:14 localhost kernel: Segment Routing with IPv6
Jan 26 15:29:14 localhost kernel: NET: Registered PF_PACKET protocol family
Jan 26 15:29:14 localhost kernel: mpls_gso: MPLS GSO support
Jan 26 15:29:14 localhost kernel: IPI shorthand broadcast: enabled
Jan 26 15:29:14 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Jan 26 15:29:14 localhost kernel: AES CTR mode by8 optimization enabled
Jan 26 15:29:14 localhost kernel: sched_clock: Marking stable (2418002830, 165044189)->(2834664121, -251617102)
Jan 26 15:29:14 localhost kernel: registered taskstats version 1
Jan 26 15:29:14 localhost kernel: Loading compiled-in X.509 certificates
Jan 26 15:29:14 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 04453f216699002fd63185eeab832de990bee6d7'
Jan 26 15:29:14 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Jan 26 15:29:14 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Jan 26 15:29:14 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Jan 26 15:29:14 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Jan 26 15:29:14 localhost kernel: Demotion targets for Node 0: null
Jan 26 15:29:14 localhost kernel: page_owner is disabled
Jan 26 15:29:14 localhost kernel: Key type .fscrypt registered
Jan 26 15:29:14 localhost kernel: Key type fscrypt-provisioning registered
Jan 26 15:29:14 localhost kernel: Key type big_key registered
Jan 26 15:29:14 localhost kernel: Key type encrypted registered
Jan 26 15:29:14 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 26 15:29:14 localhost kernel: Loading compiled-in module X.509 certificates
Jan 26 15:29:14 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 04453f216699002fd63185eeab832de990bee6d7'
Jan 26 15:29:14 localhost kernel: ima: Allocated hash algorithm: sha256
Jan 26 15:29:14 localhost kernel: ima: No architecture policies found
Jan 26 15:29:14 localhost kernel: evm: Initialising EVM extended attributes:
Jan 26 15:29:14 localhost kernel: evm: security.selinux
Jan 26 15:29:14 localhost kernel: evm: security.SMACK64 (disabled)
Jan 26 15:29:14 localhost kernel: evm: security.SMACK64EXEC (disabled)
Jan 26 15:29:14 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Jan 26 15:29:14 localhost kernel: evm: security.SMACK64MMAP (disabled)
Jan 26 15:29:14 localhost kernel: evm: security.apparmor (disabled)
Jan 26 15:29:14 localhost kernel: evm: security.ima
Jan 26 15:29:14 localhost kernel: evm: security.capability
Jan 26 15:29:14 localhost kernel: evm: HMAC attrs: 0x1
Jan 26 15:29:14 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Jan 26 15:29:14 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Jan 26 15:29:14 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Jan 26 15:29:14 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Jan 26 15:29:14 localhost kernel: usb 1-1: Manufacturer: QEMU
Jan 26 15:29:14 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Jan 26 15:29:14 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Jan 26 15:29:14 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Jan 26 15:29:14 localhost kernel: Running certificate verification RSA selftest
Jan 26 15:29:14 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Jan 26 15:29:14 localhost kernel: Running certificate verification ECDSA selftest
Jan 26 15:29:14 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Jan 26 15:29:14 localhost kernel: clk: Disabling unused clocks
Jan 26 15:29:14 localhost kernel: Freeing unused decrypted memory: 2028K
Jan 26 15:29:14 localhost kernel: Freeing unused kernel image (initmem) memory: 4200K
Jan 26 15:29:14 localhost kernel: Write protecting the kernel read-only data: 30720k
Jan 26 15:29:14 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 420K
Jan 26 15:29:14 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Jan 26 15:29:14 localhost kernel: Run /init as init process
Jan 26 15:29:14 localhost kernel:   with arguments:
Jan 26 15:29:14 localhost kernel:     /init
Jan 26 15:29:14 localhost kernel:   with environment:
Jan 26 15:29:14 localhost kernel:     HOME=/
Jan 26 15:29:14 localhost kernel:     TERM=linux
Jan 26 15:29:14 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64
Jan 26 15:29:14 localhost systemd[1]: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 26 15:29:14 localhost systemd[1]: Detected virtualization kvm.
Jan 26 15:29:14 localhost systemd[1]: Detected architecture x86-64.
Jan 26 15:29:14 localhost systemd[1]: Running in initrd.
Jan 26 15:29:14 localhost systemd[1]: No hostname configured, using default hostname.
Jan 26 15:29:14 localhost systemd[1]: Hostname set to <localhost>.
Jan 26 15:29:14 localhost systemd[1]: Initializing machine ID from VM UUID.
Jan 26 15:29:14 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Jan 26 15:29:14 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Jan 26 15:29:14 localhost systemd[1]: Reached target Local Encrypted Volumes.
Jan 26 15:29:14 localhost systemd[1]: Reached target Initrd /usr File System.
Jan 26 15:29:14 localhost systemd[1]: Reached target Local File Systems.
Jan 26 15:29:14 localhost systemd[1]: Reached target Path Units.
Jan 26 15:29:14 localhost systemd[1]: Reached target Slice Units.
Jan 26 15:29:14 localhost systemd[1]: Reached target Swaps.
Jan 26 15:29:14 localhost systemd[1]: Reached target Timer Units.
Jan 26 15:29:14 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Jan 26 15:29:14 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Jan 26 15:29:14 localhost systemd[1]: Listening on Journal Socket.
Jan 26 15:29:14 localhost systemd[1]: Listening on udev Control Socket.
Jan 26 15:29:14 localhost systemd[1]: Listening on udev Kernel Socket.
Jan 26 15:29:14 localhost systemd[1]: Reached target Socket Units.
Jan 26 15:29:14 localhost systemd[1]: Starting Create List of Static Device Nodes...
Jan 26 15:29:14 localhost systemd[1]: Starting Journal Service...
Jan 26 15:29:14 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 26 15:29:14 localhost systemd[1]: Starting Apply Kernel Variables...
Jan 26 15:29:14 localhost systemd[1]: Starting Create System Users...
Jan 26 15:29:14 localhost systemd[1]: Starting Setup Virtual Console...
Jan 26 15:29:14 localhost systemd[1]: Finished Create List of Static Device Nodes.
Jan 26 15:29:14 localhost systemd-journald[301]: Journal started
Jan 26 15:29:14 localhost systemd-journald[301]: Runtime Journal (/run/log/journal/07141d90ae2c484891d9402155316ee1) is 8.0M, max 153.6M, 145.6M free.
Jan 26 15:29:14 localhost systemd[1]: Started Journal Service.
Jan 26 15:29:14 localhost systemd[1]: Finished Apply Kernel Variables.
Jan 26 15:29:14 localhost systemd-sysusers[305]: Creating group 'users' with GID 100.
Jan 26 15:29:14 localhost systemd-sysusers[305]: Creating group 'dbus' with GID 81.
Jan 26 15:29:14 localhost systemd-sysusers[305]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Jan 26 15:29:14 localhost systemd[1]: Finished Create System Users.
Jan 26 15:29:14 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 26 15:29:14 localhost systemd[1]: Starting Create Volatile Files and Directories...
Jan 26 15:29:14 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 26 15:29:14 localhost systemd[1]: Finished Setup Virtual Console.
Jan 26 15:29:14 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Jan 26 15:29:14 localhost systemd[1]: Starting dracut cmdline hook...
Jan 26 15:29:14 localhost systemd[1]: Finished Create Volatile Files and Directories.
Jan 26 15:29:14 localhost dracut-cmdline[322]: dracut-9 dracut-057-102.git20250818.el9
Jan 26 15:29:14 localhost dracut-cmdline[322]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 26 15:29:14 localhost systemd[1]: Finished dracut cmdline hook.
Jan 26 15:29:14 localhost systemd[1]: Starting dracut pre-udev hook...
Jan 26 15:29:15 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 26 15:29:15 localhost kernel: device-mapper: uevent: version 1.0.3
Jan 26 15:29:15 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Jan 26 15:29:15 localhost kernel: RPC: Registered named UNIX socket transport module.
Jan 26 15:29:15 localhost kernel: RPC: Registered udp transport module.
Jan 26 15:29:15 localhost kernel: RPC: Registered tcp transport module.
Jan 26 15:29:15 localhost kernel: RPC: Registered tcp-with-tls transport module.
Jan 26 15:29:15 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 26 15:29:15 localhost rpc.statd[444]: Version 2.5.4 starting
Jan 26 15:29:15 localhost rpc.statd[444]: Initializing NSM state
Jan 26 15:29:15 localhost rpc.idmapd[449]: Setting log level to 0
Jan 26 15:29:15 localhost systemd[1]: Finished dracut pre-udev hook.
Jan 26 15:29:15 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 26 15:29:15 localhost systemd-udevd[462]: Using default interface naming scheme 'rhel-9.0'.
Jan 26 15:29:15 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 26 15:29:15 localhost systemd[1]: Starting dracut pre-trigger hook...
Jan 26 15:29:15 localhost systemd[1]: Finished dracut pre-trigger hook.
Jan 26 15:29:15 localhost systemd[1]: Starting Coldplug All udev Devices...
Jan 26 15:29:15 localhost systemd[1]: Created slice Slice /system/modprobe.
Jan 26 15:29:15 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 26 15:29:15 localhost systemd[1]: Finished Coldplug All udev Devices.
Jan 26 15:29:15 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 26 15:29:15 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 26 15:29:15 localhost systemd[1]: Mounting Kernel Configuration File System...
Jan 26 15:29:15 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 26 15:29:15 localhost systemd[1]: Reached target Network.
Jan 26 15:29:15 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 26 15:29:15 localhost systemd[1]: Starting dracut initqueue hook...
Jan 26 15:29:15 localhost systemd[1]: Mounted Kernel Configuration File System.
Jan 26 15:29:15 localhost systemd[1]: Reached target System Initialization.
Jan 26 15:29:15 localhost systemd[1]: Reached target Basic System.
Jan 26 15:29:15 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Jan 26 15:29:15 localhost kernel: libata version 3.00 loaded.
Jan 26 15:29:15 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Jan 26 15:29:15 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Jan 26 15:29:15 localhost kernel: scsi host0: ata_piix
Jan 26 15:29:15 localhost kernel:  vda: vda1
Jan 26 15:29:15 localhost kernel: scsi host1: ata_piix
Jan 26 15:29:15 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Jan 26 15:29:15 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Jan 26 15:29:15 localhost systemd[1]: Found device /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40.
Jan 26 15:29:15 localhost systemd[1]: Reached target Initrd Root Device.
Jan 26 15:29:15 localhost kernel: ata1: found unknown device (class 0)
Jan 26 15:29:15 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 26 15:29:15 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Jan 26 15:29:15 localhost systemd-udevd[481]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 15:29:16 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Jan 26 15:29:16 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 26 15:29:16 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 26 15:29:16 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Jan 26 15:29:16 localhost systemd[1]: Finished dracut initqueue hook.
Jan 26 15:29:16 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Jan 26 15:29:16 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Jan 26 15:29:16 localhost systemd[1]: Reached target Remote File Systems.
Jan 26 15:29:16 localhost systemd[1]: Starting dracut pre-mount hook...
Jan 26 15:29:16 localhost systemd[1]: Finished dracut pre-mount hook.
Jan 26 15:29:16 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40...
Jan 26 15:29:16 localhost systemd-fsck[557]: /usr/sbin/fsck.xfs: XFS file system.
Jan 26 15:29:16 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40.
Jan 26 15:29:16 localhost systemd[1]: Mounting /sysroot...
Jan 26 15:29:16 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Jan 26 15:29:16 localhost kernel: XFS (vda1): Mounting V5 Filesystem 22ac9141-3960-4912-b20e-19fc8a328d40
Jan 26 15:29:16 localhost kernel: XFS (vda1): Ending clean mount
Jan 26 15:29:16 localhost systemd[1]: Mounted /sysroot.
Jan 26 15:29:16 localhost systemd[1]: Reached target Initrd Root File System.
Jan 26 15:29:16 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Jan 26 15:29:16 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 26 15:29:16 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Jan 26 15:29:16 localhost systemd[1]: Reached target Initrd File Systems.
Jan 26 15:29:16 localhost systemd[1]: Reached target Initrd Default Target.
Jan 26 15:29:16 localhost systemd[1]: Starting dracut mount hook...
Jan 26 15:29:16 localhost systemd[1]: Finished dracut mount hook.
Jan 26 15:29:16 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Jan 26 15:29:17 localhost rpc.idmapd[449]: exiting on signal 15
Jan 26 15:29:17 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Jan 26 15:29:17 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Jan 26 15:29:17 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Jan 26 15:29:17 localhost systemd[1]: Stopped target Network.
Jan 26 15:29:17 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Jan 26 15:29:17 localhost systemd[1]: Stopped target Timer Units.
Jan 26 15:29:17 localhost systemd[1]: dbus.socket: Deactivated successfully.
Jan 26 15:29:17 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Jan 26 15:29:17 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 26 15:29:17 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Jan 26 15:29:17 localhost systemd[1]: Stopped target Initrd Default Target.
Jan 26 15:29:17 localhost systemd[1]: Stopped target Basic System.
Jan 26 15:29:17 localhost systemd[1]: Stopped target Initrd Root Device.
Jan 26 15:29:17 localhost systemd[1]: Stopped target Initrd /usr File System.
Jan 26 15:29:17 localhost systemd[1]: Stopped target Path Units.
Jan 26 15:29:17 localhost systemd[1]: Stopped target Remote File Systems.
Jan 26 15:29:17 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Jan 26 15:29:17 localhost systemd[1]: Stopped target Slice Units.
Jan 26 15:29:17 localhost systemd[1]: Stopped target Socket Units.
Jan 26 15:29:17 localhost systemd[1]: Stopped target System Initialization.
Jan 26 15:29:17 localhost systemd[1]: Stopped target Local File Systems.
Jan 26 15:29:17 localhost systemd[1]: Stopped target Swaps.
Jan 26 15:29:17 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Jan 26 15:29:17 localhost systemd[1]: Stopped dracut mount hook.
Jan 26 15:29:17 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 26 15:29:17 localhost systemd[1]: Stopped dracut pre-mount hook.
Jan 26 15:29:17 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Jan 26 15:29:17 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 26 15:29:17 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Jan 26 15:29:17 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 26 15:29:17 localhost systemd[1]: Stopped dracut initqueue hook.
Jan 26 15:29:17 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 26 15:29:17 localhost systemd[1]: Stopped Apply Kernel Variables.
Jan 26 15:29:17 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 26 15:29:17 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Jan 26 15:29:17 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 26 15:29:17 localhost systemd[1]: Stopped Coldplug All udev Devices.
Jan 26 15:29:17 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 26 15:29:17 localhost systemd[1]: Stopped dracut pre-trigger hook.
Jan 26 15:29:17 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Jan 26 15:29:17 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 26 15:29:17 localhost systemd[1]: Stopped Setup Virtual Console.
Jan 26 15:29:17 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jan 26 15:29:17 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 26 15:29:17 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 26 15:29:17 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Jan 26 15:29:17 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 26 15:29:17 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Jan 26 15:29:17 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 26 15:29:17 localhost systemd[1]: Closed udev Control Socket.
Jan 26 15:29:17 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 26 15:29:17 localhost systemd[1]: Closed udev Kernel Socket.
Jan 26 15:29:17 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 26 15:29:17 localhost systemd[1]: Stopped dracut pre-udev hook.
Jan 26 15:29:17 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 26 15:29:17 localhost systemd[1]: Stopped dracut cmdline hook.
Jan 26 15:29:17 localhost systemd[1]: Starting Cleanup udev Database...
Jan 26 15:29:17 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 26 15:29:17 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Jan 26 15:29:17 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 26 15:29:17 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Jan 26 15:29:17 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Jan 26 15:29:17 localhost systemd[1]: Stopped Create System Users.
Jan 26 15:29:17 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jan 26 15:29:17 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Jan 26 15:29:17 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 26 15:29:17 localhost systemd[1]: Finished Cleanup udev Database.
Jan 26 15:29:17 localhost systemd[1]: Reached target Switch Root.
Jan 26 15:29:17 localhost systemd[1]: Starting Switch Root...
Jan 26 15:29:17 localhost systemd[1]: Switching root.
Jan 26 15:29:17 localhost systemd-journald[301]: Journal stopped
Jan 26 15:29:18 localhost systemd-journald[301]: Received SIGTERM from PID 1 (systemd).
Jan 26 15:29:18 localhost kernel: audit: type=1404 audit(1769441357.268:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Jan 26 15:29:18 localhost kernel: SELinux:  policy capability network_peer_controls=1
Jan 26 15:29:18 localhost kernel: SELinux:  policy capability open_perms=1
Jan 26 15:29:18 localhost kernel: SELinux:  policy capability extended_socket_class=1
Jan 26 15:29:18 localhost kernel: SELinux:  policy capability always_check_network=0
Jan 26 15:29:18 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 26 15:29:18 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 26 15:29:18 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 26 15:29:18 localhost kernel: audit: type=1403 audit(1769441357.400:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 26 15:29:18 localhost systemd[1]: Successfully loaded SELinux policy in 135.696ms.
Jan 26 15:29:18 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 38.293ms.
Jan 26 15:29:18 localhost systemd[1]: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 26 15:29:18 localhost systemd[1]: Detected virtualization kvm.
Jan 26 15:29:18 localhost systemd[1]: Detected architecture x86-64.
Jan 26 15:29:18 localhost systemd-rc-local-generator[639]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 15:29:18 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 26 15:29:18 localhost systemd[1]: Stopped Switch Root.
Jan 26 15:29:18 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 26 15:29:18 localhost systemd[1]: Created slice Slice /system/getty.
Jan 26 15:29:18 localhost systemd[1]: Created slice Slice /system/serial-getty.
Jan 26 15:29:18 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Jan 26 15:29:18 localhost systemd[1]: Created slice User and Session Slice.
Jan 26 15:29:18 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Jan 26 15:29:18 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Jan 26 15:29:18 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Jan 26 15:29:18 localhost systemd[1]: Reached target Local Encrypted Volumes.
Jan 26 15:29:18 localhost systemd[1]: Stopped target Switch Root.
Jan 26 15:29:18 localhost systemd[1]: Stopped target Initrd File Systems.
Jan 26 15:29:18 localhost systemd[1]: Stopped target Initrd Root File System.
Jan 26 15:29:18 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Jan 26 15:29:18 localhost systemd[1]: Reached target Path Units.
Jan 26 15:29:18 localhost systemd[1]: Reached target rpc_pipefs.target.
Jan 26 15:29:18 localhost systemd[1]: Reached target Slice Units.
Jan 26 15:29:18 localhost systemd[1]: Reached target Swaps.
Jan 26 15:29:18 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Jan 26 15:29:18 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Jan 26 15:29:18 localhost systemd[1]: Reached target RPC Port Mapper.
Jan 26 15:29:18 localhost systemd[1]: Listening on Process Core Dump Socket.
Jan 26 15:29:18 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Jan 26 15:29:18 localhost systemd[1]: Listening on udev Control Socket.
Jan 26 15:29:18 localhost systemd[1]: Listening on udev Kernel Socket.
Jan 26 15:29:18 localhost systemd[1]: Mounting Huge Pages File System...
Jan 26 15:29:18 localhost systemd[1]: Mounting POSIX Message Queue File System...
Jan 26 15:29:18 localhost systemd[1]: Mounting Kernel Debug File System...
Jan 26 15:29:18 localhost systemd[1]: Mounting Kernel Trace File System...
Jan 26 15:29:18 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 26 15:29:18 localhost systemd[1]: Starting Create List of Static Device Nodes...
Jan 26 15:29:18 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 26 15:29:18 localhost systemd[1]: Starting Load Kernel Module drm...
Jan 26 15:29:18 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Jan 26 15:29:18 localhost systemd[1]: Starting Load Kernel Module fuse...
Jan 26 15:29:18 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Jan 26 15:29:18 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 26 15:29:18 localhost systemd[1]: Stopped File System Check on Root Device.
Jan 26 15:29:18 localhost systemd[1]: Stopped Journal Service.
Jan 26 15:29:18 localhost systemd[1]: Starting Journal Service...
Jan 26 15:29:18 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 26 15:29:18 localhost systemd[1]: Starting Generate network units from Kernel command line...
Jan 26 15:29:18 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 26 15:29:18 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Jan 26 15:29:18 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 26 15:29:18 localhost systemd[1]: Starting Apply Kernel Variables...
Jan 26 15:29:18 localhost kernel: fuse: init (API version 7.37)
Jan 26 15:29:18 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Jan 26 15:29:18 localhost systemd[1]: Starting Coldplug All udev Devices...
Jan 26 15:29:18 localhost systemd-journald[680]: Journal started
Jan 26 15:29:18 localhost systemd-journald[680]: Runtime Journal (/run/log/journal/85ac68c10a6e7ae08ceb898dbdca0cb5) is 8.0M, max 153.6M, 145.6M free.
Jan 26 15:29:18 localhost systemd[1]: Queued start job for default target Multi-User System.
Jan 26 15:29:18 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 26 15:29:18 localhost kernel: ACPI: bus type drm_connector registered
Jan 26 15:29:18 localhost systemd[1]: Started Journal Service.
Jan 26 15:29:18 localhost systemd[1]: Mounted Huge Pages File System.
Jan 26 15:29:18 localhost systemd[1]: Mounted POSIX Message Queue File System.
Jan 26 15:29:18 localhost systemd[1]: Mounted Kernel Debug File System.
Jan 26 15:29:18 localhost systemd[1]: Mounted Kernel Trace File System.
Jan 26 15:29:18 localhost systemd[1]: Finished Create List of Static Device Nodes.
Jan 26 15:29:18 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 26 15:29:18 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 26 15:29:18 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 26 15:29:18 localhost systemd[1]: Finished Load Kernel Module drm.
Jan 26 15:29:18 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 26 15:29:18 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Jan 26 15:29:18 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 26 15:29:18 localhost systemd[1]: Finished Load Kernel Module fuse.
Jan 26 15:29:18 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Jan 26 15:29:18 localhost systemd[1]: Finished Generate network units from Kernel command line.
Jan 26 15:29:18 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Jan 26 15:29:18 localhost systemd[1]: Finished Apply Kernel Variables.
Jan 26 15:29:18 localhost systemd[1]: Mounting FUSE Control File System...
Jan 26 15:29:18 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 26 15:29:18 localhost systemd[1]: Starting Rebuild Hardware Database...
Jan 26 15:29:18 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Jan 26 15:29:18 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 26 15:29:18 localhost systemd[1]: Starting Load/Save OS Random Seed...
Jan 26 15:29:18 localhost systemd[1]: Starting Create System Users...
Jan 26 15:29:18 localhost systemd-journald[680]: Runtime Journal (/run/log/journal/85ac68c10a6e7ae08ceb898dbdca0cb5) is 8.0M, max 153.6M, 145.6M free.
Jan 26 15:29:18 localhost systemd-journald[680]: Received client request to flush runtime journal.
Jan 26 15:29:18 localhost systemd[1]: Finished Coldplug All udev Devices.
Jan 26 15:29:18 localhost systemd[1]: Mounted FUSE Control File System.
Jan 26 15:29:18 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Jan 26 15:29:18 localhost systemd[1]: Finished Load/Save OS Random Seed.
Jan 26 15:29:18 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 26 15:29:18 localhost systemd[1]: Finished Create System Users.
Jan 26 15:29:19 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 26 15:29:19 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 26 15:29:19 localhost systemd[1]: Reached target Preparation for Local File Systems.
Jan 26 15:29:19 localhost systemd[1]: Reached target Local File Systems.
Jan 26 15:29:19 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Jan 26 15:29:19 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Jan 26 15:29:19 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 26 15:29:19 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Jan 26 15:29:19 localhost systemd[1]: Starting Automatic Boot Loader Update...
Jan 26 15:29:19 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Jan 26 15:29:19 localhost systemd[1]: Starting Create Volatile Files and Directories...
Jan 26 15:29:19 localhost bootctl[698]: Couldn't find EFI system partition, skipping.
Jan 26 15:29:19 localhost systemd[1]: Finished Automatic Boot Loader Update.
Jan 26 15:29:19 localhost systemd[1]: Finished Create Volatile Files and Directories.
Jan 26 15:29:19 localhost systemd[1]: Starting Security Auditing Service...
Jan 26 15:29:19 localhost systemd[1]: Starting RPC Bind...
Jan 26 15:29:19 localhost auditd[704]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Jan 26 15:29:19 localhost auditd[704]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Jan 26 15:29:19 localhost systemd[1]: Starting Rebuild Journal Catalog...
Jan 26 15:29:19 localhost systemd[1]: Started RPC Bind.
Jan 26 15:29:19 localhost systemd[1]: Finished Rebuild Journal Catalog.
Jan 26 15:29:19 localhost augenrules[709]: /sbin/augenrules: No change
Jan 26 15:29:19 localhost augenrules[724]: No rules
Jan 26 15:29:19 localhost augenrules[724]: enabled 1
Jan 26 15:29:19 localhost augenrules[724]: failure 1
Jan 26 15:29:19 localhost augenrules[724]: pid 704
Jan 26 15:29:19 localhost augenrules[724]: rate_limit 0
Jan 26 15:29:19 localhost augenrules[724]: backlog_limit 8192
Jan 26 15:29:19 localhost augenrules[724]: lost 0
Jan 26 15:29:19 localhost augenrules[724]: backlog 3
Jan 26 15:29:19 localhost augenrules[724]: backlog_wait_time 60000
Jan 26 15:29:19 localhost augenrules[724]: backlog_wait_time_actual 0
Jan 26 15:29:19 localhost systemd[1]: Started Security Auditing Service.
Jan 26 15:29:19 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Jan 26 15:29:19 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Jan 26 15:29:19 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Jan 26 15:29:20 localhost systemd[1]: Finished Rebuild Hardware Database.
Jan 26 15:29:20 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 26 15:29:20 localhost systemd[1]: Starting Update is Completed...
Jan 26 15:29:20 localhost systemd[1]: Finished Update is Completed.
Jan 26 15:29:20 localhost systemd-udevd[733]: Using default interface naming scheme 'rhel-9.0'.
Jan 26 15:29:20 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 26 15:29:20 localhost systemd[1]: Reached target System Initialization.
Jan 26 15:29:20 localhost systemd[1]: Started dnf makecache --timer.
Jan 26 15:29:20 localhost systemd[1]: Started Daily rotation of log files.
Jan 26 15:29:20 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Jan 26 15:29:20 localhost systemd[1]: Reached target Timer Units.
Jan 26 15:29:20 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Jan 26 15:29:20 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Jan 26 15:29:20 localhost systemd[1]: Reached target Socket Units.
Jan 26 15:29:20 localhost systemd[1]: Starting D-Bus System Message Bus...
Jan 26 15:29:20 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 26 15:29:20 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Jan 26 15:29:20 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 26 15:29:20 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 26 15:29:20 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 26 15:29:20 localhost systemd-udevd[747]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 15:29:20 localhost systemd[1]: Started D-Bus System Message Bus.
Jan 26 15:29:20 localhost systemd[1]: Reached target Basic System.
Jan 26 15:29:20 localhost dbus-broker-lau[761]: Ready
Jan 26 15:29:20 localhost systemd[1]: Starting NTP client/server...
Jan 26 15:29:20 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Jan 26 15:29:20 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Jan 26 15:29:20 localhost systemd[1]: Starting IPv4 firewall with iptables...
Jan 26 15:29:20 localhost systemd[1]: Started irqbalance daemon.
Jan 26 15:29:20 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Jan 26 15:29:20 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 26 15:29:20 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 26 15:29:20 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 26 15:29:20 localhost systemd[1]: Reached target sshd-keygen.target.
Jan 26 15:29:20 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Jan 26 15:29:20 localhost systemd[1]: Reached target User and Group Name Lookups.
Jan 26 15:29:20 localhost systemd[1]: Starting User Login Management...
Jan 26 15:29:20 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Jan 26 15:29:20 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jan 26 15:29:20 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 26 15:29:20 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 26 15:29:20 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Jan 26 15:29:20 localhost chronyd[801]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 26 15:29:20 localhost chronyd[801]: Loaded 0 symmetric keys
Jan 26 15:29:20 localhost chronyd[801]: Using right/UTC timezone to obtain leap second data
Jan 26 15:29:20 localhost chronyd[801]: Loaded seccomp filter (level 2)
Jan 26 15:29:20 localhost systemd[1]: Started NTP client/server.
Jan 26 15:29:20 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Jan 26 15:29:20 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Jan 26 15:29:20 localhost systemd-logind[788]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 26 15:29:20 localhost systemd-logind[788]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 26 15:29:20 localhost kernel: kvm_amd: TSC scaling supported
Jan 26 15:29:20 localhost kernel: kvm_amd: Nested Virtualization enabled
Jan 26 15:29:20 localhost kernel: kvm_amd: Nested Paging enabled
Jan 26 15:29:20 localhost kernel: kvm_amd: LBR virtualization supported
Jan 26 15:29:20 localhost systemd-logind[788]: New seat seat0.
Jan 26 15:29:20 localhost systemd[1]: Started User Login Management.
Jan 26 15:29:20 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jan 26 15:29:20 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jan 26 15:29:20 localhost kernel: Console: switching to colour dummy device 80x25
Jan 26 15:29:20 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 26 15:29:20 localhost kernel: [drm] features: -context_init
Jan 26 15:29:20 localhost kernel: [drm] number of scanouts: 1
Jan 26 15:29:20 localhost kernel: [drm] number of cap sets: 0
Jan 26 15:29:20 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Jan 26 15:29:20 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jan 26 15:29:20 localhost kernel: Console: switching to colour frame buffer device 128x48
Jan 26 15:29:20 localhost kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 26 15:29:20 localhost iptables.init[783]: iptables: Applying firewall rules: [  OK  ]
Jan 26 15:29:20 localhost systemd[1]: Finished IPv4 firewall with iptables.
Jan 26 15:29:21 localhost cloud-init[841]: Cloud-init v. 24.4-8.el9 running 'init-local' at Mon, 26 Jan 2026 15:29:21 +0000. Up 9.93 seconds.
Jan 26 15:29:21 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Jan 26 15:29:21 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Jan 26 15:29:21 localhost systemd[1]: run-cloud\x2dinit-tmp-tmpx5yaozxj.mount: Deactivated successfully.
Jan 26 15:29:21 localhost systemd[1]: Starting Hostname Service...
Jan 26 15:29:21 localhost systemd[1]: Started Hostname Service.
Jan 26 15:29:21 np0005595918.novalocal systemd-hostnamed[855]: Hostname set to <np0005595918.novalocal> (static)
Jan 26 15:29:21 np0005595918.novalocal systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Jan 26 15:29:21 np0005595918.novalocal systemd[1]: Reached target Preparation for Network.
Jan 26 15:29:21 np0005595918.novalocal systemd[1]: Starting Network Manager...
Jan 26 15:29:21 np0005595918.novalocal NetworkManager[859]: <info>  [1769441361.7182] NetworkManager (version 1.54.3-2.el9) is starting... (boot:5124ce38-efa8-40f4-a4ab-032935f2d131)
Jan 26 15:29:21 np0005595918.novalocal NetworkManager[859]: <info>  [1769441361.7189] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 26 15:29:21 np0005595918.novalocal NetworkManager[859]: <info>  [1769441361.7278] manager[0x55732cf1e000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 26 15:29:21 np0005595918.novalocal NetworkManager[859]: <info>  [1769441361.7316] hostname: hostname: using hostnamed
Jan 26 15:29:21 np0005595918.novalocal NetworkManager[859]: <info>  [1769441361.7316] hostname: static hostname changed from (none) to "np0005595918.novalocal"
Jan 26 15:29:21 np0005595918.novalocal NetworkManager[859]: <info>  [1769441361.7492] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 26 15:29:21 np0005595918.novalocal NetworkManager[859]: <info>  [1769441361.7616] manager[0x55732cf1e000]: rfkill: Wi-Fi hardware radio set enabled
Jan 26 15:29:21 np0005595918.novalocal NetworkManager[859]: <info>  [1769441361.7617] manager[0x55732cf1e000]: rfkill: WWAN hardware radio set enabled
Jan 26 15:29:21 np0005595918.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Jan 26 15:29:21 np0005595918.novalocal NetworkManager[859]: <info>  [1769441361.7668] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 26 15:29:21 np0005595918.novalocal NetworkManager[859]: <info>  [1769441361.7669] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 26 15:29:21 np0005595918.novalocal NetworkManager[859]: <info>  [1769441361.7670] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 26 15:29:21 np0005595918.novalocal NetworkManager[859]: <info>  [1769441361.7670] manager: Networking is enabled by state file
Jan 26 15:29:21 np0005595918.novalocal NetworkManager[859]: <info>  [1769441361.7672] settings: Loaded settings plugin: keyfile (internal)
Jan 26 15:29:21 np0005595918.novalocal NetworkManager[859]: <info>  [1769441361.7686] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 26 15:29:21 np0005595918.novalocal NetworkManager[859]: <info>  [1769441361.7708] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 26 15:29:21 np0005595918.novalocal NetworkManager[859]: <info>  [1769441361.7720] dhcp: init: Using DHCP client 'internal'
Jan 26 15:29:21 np0005595918.novalocal NetworkManager[859]: <info>  [1769441361.7723] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 26 15:29:21 np0005595918.novalocal NetworkManager[859]: <info>  [1769441361.7736] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 15:29:21 np0005595918.novalocal NetworkManager[859]: <info>  [1769441361.7745] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 26 15:29:21 np0005595918.novalocal NetworkManager[859]: <info>  [1769441361.7753] device (lo): Activation: starting connection 'lo' (8f11ff48-691a-496d-8a19-1570898b30be)
Jan 26 15:29:21 np0005595918.novalocal NetworkManager[859]: <info>  [1769441361.7764] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 26 15:29:21 np0005595918.novalocal NetworkManager[859]: <info>  [1769441361.7766] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 26 15:29:21 np0005595918.novalocal NetworkManager[859]: <info>  [1769441361.7799] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 26 15:29:21 np0005595918.novalocal NetworkManager[859]: <info>  [1769441361.7803] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 26 15:29:21 np0005595918.novalocal NetworkManager[859]: <info>  [1769441361.7806] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 26 15:29:21 np0005595918.novalocal NetworkManager[859]: <info>  [1769441361.7808] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 26 15:29:21 np0005595918.novalocal NetworkManager[859]: <info>  [1769441361.7810] device (eth0): carrier: link connected
Jan 26 15:29:21 np0005595918.novalocal NetworkManager[859]: <info>  [1769441361.7814] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 26 15:29:21 np0005595918.novalocal NetworkManager[859]: <info>  [1769441361.7819] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 26 15:29:21 np0005595918.novalocal NetworkManager[859]: <info>  [1769441361.7831] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 26 15:29:21 np0005595918.novalocal NetworkManager[859]: <info>  [1769441361.7838] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 26 15:29:21 np0005595918.novalocal NetworkManager[859]: <info>  [1769441361.7840] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 26 15:29:21 np0005595918.novalocal NetworkManager[859]: <info>  [1769441361.7842] manager: NetworkManager state is now CONNECTING
Jan 26 15:29:21 np0005595918.novalocal NetworkManager[859]: <info>  [1769441361.7844] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 26 15:29:21 np0005595918.novalocal NetworkManager[859]: <info>  [1769441361.7852] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 26 15:29:21 np0005595918.novalocal NetworkManager[859]: <info>  [1769441361.7856] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 26 15:29:21 np0005595918.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 26 15:29:21 np0005595918.novalocal systemd[1]: Started Network Manager.
Jan 26 15:29:21 np0005595918.novalocal systemd[1]: Reached target Network.
Jan 26 15:29:21 np0005595918.novalocal systemd[1]: Starting Network Manager Wait Online...
Jan 26 15:29:21 np0005595918.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Jan 26 15:29:21 np0005595918.novalocal NetworkManager[859]: <info>  [1769441361.8000] dhcp4 (eth0): state changed new lease, address=38.102.83.142
Jan 26 15:29:21 np0005595918.novalocal NetworkManager[859]: <info>  [1769441361.8013] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 26 15:29:21 np0005595918.novalocal NetworkManager[859]: <info>  [1769441361.8036] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 26 15:29:21 np0005595918.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 26 15:29:21 np0005595918.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Jan 26 15:29:21 np0005595918.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 26 15:29:21 np0005595918.novalocal systemd[1]: Reached target NFS client services.
Jan 26 15:29:21 np0005595918.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Jan 26 15:29:21 np0005595918.novalocal NetworkManager[859]: <info>  [1769441361.8198] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 26 15:29:21 np0005595918.novalocal NetworkManager[859]: <info>  [1769441361.8202] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 26 15:29:21 np0005595918.novalocal NetworkManager[859]: <info>  [1769441361.8203] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 26 15:29:21 np0005595918.novalocal systemd[1]: Reached target Remote File Systems.
Jan 26 15:29:21 np0005595918.novalocal NetworkManager[859]: <info>  [1769441361.8210] device (lo): Activation: successful, device activated.
Jan 26 15:29:21 np0005595918.novalocal NetworkManager[859]: <info>  [1769441361.8215] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 26 15:29:21 np0005595918.novalocal NetworkManager[859]: <info>  [1769441361.8219] manager: NetworkManager state is now CONNECTED_SITE
Jan 26 15:29:21 np0005595918.novalocal NetworkManager[859]: <info>  [1769441361.8222] device (eth0): Activation: successful, device activated.
Jan 26 15:29:21 np0005595918.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 26 15:29:21 np0005595918.novalocal NetworkManager[859]: <info>  [1769441361.8226] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 26 15:29:21 np0005595918.novalocal NetworkManager[859]: <info>  [1769441361.8228] manager: startup complete
Jan 26 15:29:21 np0005595918.novalocal systemd[1]: Finished Network Manager Wait Online.
Jan 26 15:29:21 np0005595918.novalocal systemd[1]: Starting Cloud-init: Network Stage...
Jan 26 15:29:22 np0005595918.novalocal cloud-init[923]: Cloud-init v. 24.4-8.el9 running 'init' at Mon, 26 Jan 2026 15:29:22 +0000. Up 10.95 seconds.
Jan 26 15:29:22 np0005595918.novalocal cloud-init[923]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Jan 26 15:29:22 np0005595918.novalocal cloud-init[923]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 26 15:29:22 np0005595918.novalocal cloud-init[923]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Jan 26 15:29:22 np0005595918.novalocal cloud-init[923]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 26 15:29:22 np0005595918.novalocal cloud-init[923]: ci-info: |  eth0  | True |        38.102.83.142         | 255.255.255.0 | global | fa:16:3e:bd:95:81 |
Jan 26 15:29:22 np0005595918.novalocal cloud-init[923]: ci-info: |  eth0  | True | fe80::f816:3eff:febd:9581/64 |       .       |  link  | fa:16:3e:bd:95:81 |
Jan 26 15:29:22 np0005595918.novalocal cloud-init[923]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Jan 26 15:29:22 np0005595918.novalocal cloud-init[923]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Jan 26 15:29:22 np0005595918.novalocal cloud-init[923]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 26 15:29:22 np0005595918.novalocal cloud-init[923]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Jan 26 15:29:22 np0005595918.novalocal cloud-init[923]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 26 15:29:22 np0005595918.novalocal cloud-init[923]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Jan 26 15:29:22 np0005595918.novalocal cloud-init[923]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 26 15:29:22 np0005595918.novalocal cloud-init[923]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Jan 26 15:29:22 np0005595918.novalocal cloud-init[923]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Jan 26 15:29:22 np0005595918.novalocal cloud-init[923]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Jan 26 15:29:22 np0005595918.novalocal cloud-init[923]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 26 15:29:22 np0005595918.novalocal cloud-init[923]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Jan 26 15:29:22 np0005595918.novalocal cloud-init[923]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 26 15:29:22 np0005595918.novalocal cloud-init[923]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Jan 26 15:29:22 np0005595918.novalocal cloud-init[923]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 26 15:29:22 np0005595918.novalocal cloud-init[923]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Jan 26 15:29:22 np0005595918.novalocal cloud-init[923]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Jan 26 15:29:22 np0005595918.novalocal cloud-init[923]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 26 15:29:22 np0005595918.novalocal useradd[989]: new group: name=cloud-user, GID=1001
Jan 26 15:29:22 np0005595918.novalocal useradd[989]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Jan 26 15:29:22 np0005595918.novalocal useradd[989]: add 'cloud-user' to group 'adm'
Jan 26 15:29:22 np0005595918.novalocal useradd[989]: add 'cloud-user' to group 'systemd-journal'
Jan 26 15:29:22 np0005595918.novalocal useradd[989]: add 'cloud-user' to shadow group 'adm'
Jan 26 15:29:22 np0005595918.novalocal useradd[989]: add 'cloud-user' to shadow group 'systemd-journal'
Jan 26 15:29:23 np0005595918.novalocal cloud-init[923]: Generating public/private rsa key pair.
Jan 26 15:29:23 np0005595918.novalocal cloud-init[923]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Jan 26 15:29:23 np0005595918.novalocal cloud-init[923]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Jan 26 15:29:23 np0005595918.novalocal cloud-init[923]: The key fingerprint is:
Jan 26 15:29:23 np0005595918.novalocal cloud-init[923]: SHA256:FPwZ/JfxybKDTogGJoGc9NeG67ZD6TThRTwXfKeN6CI root@np0005595918.novalocal
Jan 26 15:29:23 np0005595918.novalocal cloud-init[923]: The key's randomart image is:
Jan 26 15:29:23 np0005595918.novalocal cloud-init[923]: +---[RSA 3072]----+
Jan 26 15:29:23 np0005595918.novalocal cloud-init[923]: |o.o   ..oo.      |
Jan 26 15:29:23 np0005595918.novalocal cloud-init[923]: | +..  o+.+o. ..  |
Jan 26 15:29:23 np0005595918.novalocal cloud-init[923]: |   ..o.o+.o+= .+.|
Jan 26 15:29:23 np0005595918.novalocal cloud-init[923]: |   ..+oo .oo.ooo.|
Jan 26 15:29:23 np0005595918.novalocal cloud-init[923]: |    +.= S . ..o  |
Jan 26 15:29:23 np0005595918.novalocal cloud-init[923]: |    .E + o o o   |
Jan 26 15:29:23 np0005595918.novalocal cloud-init[923]: |    +o+ . o   .  |
Jan 26 15:29:23 np0005595918.novalocal cloud-init[923]: |    .o.    .     |
Jan 26 15:29:23 np0005595918.novalocal cloud-init[923]: |     ..          |
Jan 26 15:29:23 np0005595918.novalocal cloud-init[923]: +----[SHA256]-----+
Jan 26 15:29:23 np0005595918.novalocal cloud-init[923]: Generating public/private ecdsa key pair.
Jan 26 15:29:23 np0005595918.novalocal cloud-init[923]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Jan 26 15:29:23 np0005595918.novalocal cloud-init[923]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Jan 26 15:29:23 np0005595918.novalocal cloud-init[923]: The key fingerprint is:
Jan 26 15:29:23 np0005595918.novalocal cloud-init[923]: SHA256:69jR4udRD6JX7dcVBmt57B7kjwmHCLg2qrkaGq2y/QA root@np0005595918.novalocal
Jan 26 15:29:23 np0005595918.novalocal cloud-init[923]: The key's randomart image is:
Jan 26 15:29:23 np0005595918.novalocal cloud-init[923]: +---[ECDSA 256]---+
Jan 26 15:29:23 np0005595918.novalocal cloud-init[923]: |             .   |
Jan 26 15:29:23 np0005595918.novalocal cloud-init[923]: |       .      =  |
Jan 26 15:29:23 np0005595918.novalocal cloud-init[923]: |      . .    + * |
Jan 26 15:29:23 np0005595918.novalocal cloud-init[923]: |       . . ..o* .|
Jan 26 15:29:23 np0005595918.novalocal cloud-init[923]: |E     + S o * o+.|
Jan 26 15:29:23 np0005595918.novalocal cloud-init[923]: | o   o . + + *.o=|
Jan 26 15:29:23 np0005595918.novalocal cloud-init[923]: |o o .   = +   =.+|
Jan 26 15:29:23 np0005595918.novalocal cloud-init[923]: |o= +   = +..   . |
Jan 26 15:29:23 np0005595918.novalocal cloud-init[923]: |*o=o. . +o.      |
Jan 26 15:29:23 np0005595918.novalocal cloud-init[923]: +----[SHA256]-----+
Jan 26 15:29:23 np0005595918.novalocal cloud-init[923]: Generating public/private ed25519 key pair.
Jan 26 15:29:23 np0005595918.novalocal cloud-init[923]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Jan 26 15:29:23 np0005595918.novalocal cloud-init[923]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Jan 26 15:29:23 np0005595918.novalocal cloud-init[923]: The key fingerprint is:
Jan 26 15:29:23 np0005595918.novalocal cloud-init[923]: SHA256:4wc41heanIN4u+q3BKu+bl0d/+B9vYbltyo3N1ZlPc4 root@np0005595918.novalocal
Jan 26 15:29:23 np0005595918.novalocal cloud-init[923]: The key's randomart image is:
Jan 26 15:29:23 np0005595918.novalocal cloud-init[923]: +--[ED25519 256]--+
Jan 26 15:29:23 np0005595918.novalocal cloud-init[923]: |                 |
Jan 26 15:29:23 np0005595918.novalocal cloud-init[923]: |                 |
Jan 26 15:29:23 np0005595918.novalocal cloud-init[923]: |          .     .|
Jan 26 15:29:23 np0005595918.novalocal cloud-init[923]: |     . =.+ .   .+|
Jan 26 15:29:23 np0005595918.novalocal cloud-init[923]: |    o *.So.   o.o|
Jan 26 15:29:23 np0005595918.novalocal cloud-init[923]: |     =.+.=o    E.|
Jan 26 15:29:23 np0005595918.novalocal cloud-init[923]: |   ...o ...+  +..|
Jan 26 15:29:23 np0005595918.novalocal cloud-init[923]: |  ...... ...o+.*o|
Jan 26 15:29:23 np0005595918.novalocal cloud-init[923]: | +=oooo.    oo*o=|
Jan 26 15:29:23 np0005595918.novalocal cloud-init[923]: +----[SHA256]-----+
Jan 26 15:29:23 np0005595918.novalocal systemd[1]: Finished Cloud-init: Network Stage.
Jan 26 15:29:23 np0005595918.novalocal systemd[1]: Reached target Cloud-config availability.
Jan 26 15:29:23 np0005595918.novalocal systemd[1]: Reached target Network is Online.
Jan 26 15:29:23 np0005595918.novalocal systemd[1]: Starting Cloud-init: Config Stage...
Jan 26 15:29:23 np0005595918.novalocal systemd[1]: Starting Crash recovery kernel arming...
Jan 26 15:29:23 np0005595918.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Jan 26 15:29:23 np0005595918.novalocal systemd[1]: Starting System Logging Service...
Jan 26 15:29:23 np0005595918.novalocal sm-notify[1005]: Version 2.5.4 starting
Jan 26 15:29:23 np0005595918.novalocal systemd[1]: Starting OpenSSH server daemon...
Jan 26 15:29:23 np0005595918.novalocal systemd[1]: Starting Permit User Sessions...
Jan 26 15:29:23 np0005595918.novalocal systemd[1]: Started Notify NFS peers of a restart.
Jan 26 15:29:23 np0005595918.novalocal systemd[1]: Finished Permit User Sessions.
Jan 26 15:29:23 np0005595918.novalocal sshd[1007]: Server listening on 0.0.0.0 port 22.
Jan 26 15:29:23 np0005595918.novalocal sshd[1007]: Server listening on :: port 22.
Jan 26 15:29:23 np0005595918.novalocal systemd[1]: Started Command Scheduler.
Jan 26 15:29:23 np0005595918.novalocal systemd[1]: Started Getty on tty1.
Jan 26 15:29:23 np0005595918.novalocal systemd[1]: Started Serial Getty on ttyS0.
Jan 26 15:29:23 np0005595918.novalocal crond[1010]: (CRON) STARTUP (1.5.7)
Jan 26 15:29:23 np0005595918.novalocal crond[1010]: (CRON) INFO (Syslog will be used instead of sendmail.)
Jan 26 15:29:23 np0005595918.novalocal systemd[1]: Reached target Login Prompts.
Jan 26 15:29:23 np0005595918.novalocal crond[1010]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 63% if used.)
Jan 26 15:29:23 np0005595918.novalocal crond[1010]: (CRON) INFO (running with inotify support)
Jan 26 15:29:23 np0005595918.novalocal systemd[1]: Started OpenSSH server daemon.
Jan 26 15:29:23 np0005595918.novalocal rsyslogd[1006]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1006" x-info="https://www.rsyslog.com"] start
Jan 26 15:29:23 np0005595918.novalocal rsyslogd[1006]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Jan 26 15:29:23 np0005595918.novalocal systemd[1]: Started System Logging Service.
Jan 26 15:29:23 np0005595918.novalocal systemd[1]: Reached target Multi-User System.
Jan 26 15:29:23 np0005595918.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Jan 26 15:29:23 np0005595918.novalocal sshd-session[1025]: Unable to negotiate with 38.102.83.114 port 52618: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Jan 26 15:29:23 np0005595918.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Jan 26 15:29:23 np0005595918.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Jan 26 15:29:23 np0005595918.novalocal sshd-session[1040]: Unable to negotiate with 38.102.83.114 port 52648: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Jan 26 15:29:23 np0005595918.novalocal sshd-session[1050]: Unable to negotiate with 38.102.83.114 port 52652: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Jan 26 15:29:23 np0005595918.novalocal rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 26 15:29:23 np0005595918.novalocal sshd-session[1011]: Connection closed by 38.102.83.114 port 52608 [preauth]
Jan 26 15:29:23 np0005595918.novalocal sshd-session[1068]: Connection reset by 38.102.83.114 port 52672 [preauth]
Jan 26 15:29:23 np0005595918.novalocal sshd-session[1077]: Unable to negotiate with 38.102.83.114 port 52678: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Jan 26 15:29:23 np0005595918.novalocal sshd-session[1031]: Connection closed by 38.102.83.114 port 52634 [preauth]
Jan 26 15:29:23 np0005595918.novalocal sshd-session[1079]: Unable to negotiate with 38.102.83.114 port 52688: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Jan 26 15:29:23 np0005595918.novalocal kdumpctl[1020]: kdump: No kdump initial ramdisk found.
Jan 26 15:29:23 np0005595918.novalocal kdumpctl[1020]: kdump: Rebuilding /boot/initramfs-5.14.0-661.el9.x86_64kdump.img
Jan 26 15:29:23 np0005595918.novalocal sshd-session[1057]: Connection closed by 38.102.83.114 port 52658 [preauth]
Jan 26 15:29:23 np0005595918.novalocal cloud-init[1138]: Cloud-init v. 24.4-8.el9 running 'modules:config' at Mon, 26 Jan 2026 15:29:23 +0000. Up 12.50 seconds.
Jan 26 15:29:23 np0005595918.novalocal systemd[1]: Finished Cloud-init: Config Stage.
Jan 26 15:29:23 np0005595918.novalocal systemd[1]: Starting Cloud-init: Final Stage...
Jan 26 15:29:24 np0005595918.novalocal dracut[1284]: dracut-057-102.git20250818.el9
Jan 26 15:29:24 np0005595918.novalocal dracut[1286]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-661.el9.x86_64kdump.img 5.14.0-661.el9.x86_64
Jan 26 15:29:24 np0005595918.novalocal cloud-init[1361]: Cloud-init v. 24.4-8.el9 running 'modules:final' at Mon, 26 Jan 2026 15:29:24 +0000. Up 13.14 seconds.
Jan 26 15:29:24 np0005595918.novalocal cloud-init[1377]: #############################################################
Jan 26 15:29:24 np0005595918.novalocal cloud-init[1378]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Jan 26 15:29:24 np0005595918.novalocal cloud-init[1383]: 256 SHA256:69jR4udRD6JX7dcVBmt57B7kjwmHCLg2qrkaGq2y/QA root@np0005595918.novalocal (ECDSA)
Jan 26 15:29:24 np0005595918.novalocal cloud-init[1385]: 256 SHA256:4wc41heanIN4u+q3BKu+bl0d/+B9vYbltyo3N1ZlPc4 root@np0005595918.novalocal (ED25519)
Jan 26 15:29:24 np0005595918.novalocal cloud-init[1390]: 3072 SHA256:FPwZ/JfxybKDTogGJoGc9NeG67ZD6TThRTwXfKeN6CI root@np0005595918.novalocal (RSA)
Jan 26 15:29:24 np0005595918.novalocal cloud-init[1391]: -----END SSH HOST KEY FINGERPRINTS-----
Jan 26 15:29:24 np0005595918.novalocal cloud-init[1392]: #############################################################
Jan 26 15:29:24 np0005595918.novalocal cloud-init[1361]: Cloud-init v. 24.4-8.el9 finished at Mon, 26 Jan 2026 15:29:24 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 13.35 seconds
Jan 26 15:29:24 np0005595918.novalocal systemd[1]: Finished Cloud-init: Final Stage.
Jan 26 15:29:24 np0005595918.novalocal systemd[1]: Reached target Cloud-init target.
Jan 26 15:29:24 np0005595918.novalocal dracut[1286]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Jan 26 15:29:24 np0005595918.novalocal dracut[1286]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Jan 26 15:29:24 np0005595918.novalocal dracut[1286]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Jan 26 15:29:24 np0005595918.novalocal dracut[1286]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 26 15:29:24 np0005595918.novalocal dracut[1286]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 26 15:29:24 np0005595918.novalocal dracut[1286]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 26 15:29:24 np0005595918.novalocal dracut[1286]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 26 15:29:24 np0005595918.novalocal dracut[1286]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 26 15:29:24 np0005595918.novalocal dracut[1286]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 26 15:29:24 np0005595918.novalocal dracut[1286]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 26 15:29:24 np0005595918.novalocal dracut[1286]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 26 15:29:24 np0005595918.novalocal dracut[1286]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 26 15:29:24 np0005595918.novalocal dracut[1286]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 26 15:29:24 np0005595918.novalocal dracut[1286]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 26 15:29:24 np0005595918.novalocal dracut[1286]: Module 'ifcfg' will not be installed, because it's in the list to be omitted!
Jan 26 15:29:24 np0005595918.novalocal dracut[1286]: Module 'plymouth' will not be installed, because it's in the list to be omitted!
Jan 26 15:29:24 np0005595918.novalocal dracut[1286]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 26 15:29:24 np0005595918.novalocal dracut[1286]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 26 15:29:24 np0005595918.novalocal dracut[1286]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 26 15:29:24 np0005595918.novalocal dracut[1286]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 26 15:29:24 np0005595918.novalocal dracut[1286]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 26 15:29:24 np0005595918.novalocal dracut[1286]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 26 15:29:24 np0005595918.novalocal dracut[1286]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 26 15:29:24 np0005595918.novalocal dracut[1286]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 26 15:29:24 np0005595918.novalocal dracut[1286]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 26 15:29:24 np0005595918.novalocal dracut[1286]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 26 15:29:25 np0005595918.novalocal dracut[1286]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 26 15:29:25 np0005595918.novalocal dracut[1286]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 26 15:29:25 np0005595918.novalocal dracut[1286]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 26 15:29:25 np0005595918.novalocal dracut[1286]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 26 15:29:25 np0005595918.novalocal dracut[1286]: Module 'resume' will not be installed, because it's in the list to be omitted!
Jan 26 15:29:25 np0005595918.novalocal dracut[1286]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Jan 26 15:29:25 np0005595918.novalocal dracut[1286]: Module 'earlykdump' will not be installed, because it's in the list to be omitted!
Jan 26 15:29:25 np0005595918.novalocal dracut[1286]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 26 15:29:25 np0005595918.novalocal dracut[1286]: memstrack is not available
Jan 26 15:29:25 np0005595918.novalocal dracut[1286]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 26 15:29:25 np0005595918.novalocal dracut[1286]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 26 15:29:25 np0005595918.novalocal dracut[1286]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 26 15:29:25 np0005595918.novalocal dracut[1286]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 26 15:29:25 np0005595918.novalocal dracut[1286]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 26 15:29:25 np0005595918.novalocal dracut[1286]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 26 15:29:25 np0005595918.novalocal dracut[1286]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 26 15:29:25 np0005595918.novalocal dracut[1286]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 26 15:29:25 np0005595918.novalocal dracut[1286]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 26 15:29:25 np0005595918.novalocal dracut[1286]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 26 15:29:25 np0005595918.novalocal dracut[1286]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 26 15:29:25 np0005595918.novalocal dracut[1286]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 26 15:29:25 np0005595918.novalocal dracut[1286]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 26 15:29:25 np0005595918.novalocal dracut[1286]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 26 15:29:25 np0005595918.novalocal dracut[1286]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 26 15:29:25 np0005595918.novalocal dracut[1286]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 26 15:29:25 np0005595918.novalocal dracut[1286]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 26 15:29:25 np0005595918.novalocal dracut[1286]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 26 15:29:25 np0005595918.novalocal dracut[1286]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 26 15:29:25 np0005595918.novalocal dracut[1286]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 26 15:29:25 np0005595918.novalocal dracut[1286]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 26 15:29:25 np0005595918.novalocal dracut[1286]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 26 15:29:25 np0005595918.novalocal dracut[1286]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 26 15:29:25 np0005595918.novalocal dracut[1286]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 26 15:29:25 np0005595918.novalocal dracut[1286]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 26 15:29:25 np0005595918.novalocal dracut[1286]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 26 15:29:25 np0005595918.novalocal dracut[1286]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 26 15:29:25 np0005595918.novalocal dracut[1286]: memstrack is not available
Jan 26 15:29:25 np0005595918.novalocal dracut[1286]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 26 15:29:25 np0005595918.novalocal dracut[1286]: *** Including module: systemd ***
Jan 26 15:29:26 np0005595918.novalocal dracut[1286]: *** Including module: fips ***
Jan 26 15:29:26 np0005595918.novalocal dracut[1286]: *** Including module: systemd-initrd ***
Jan 26 15:29:26 np0005595918.novalocal dracut[1286]: *** Including module: i18n ***
Jan 26 15:29:26 np0005595918.novalocal dracut[1286]: *** Including module: drm ***
Jan 26 15:29:26 np0005595918.novalocal chronyd[801]: Selected source 162.159.200.123 (2.centos.pool.ntp.org)
Jan 26 15:29:26 np0005595918.novalocal chronyd[801]: System clock TAI offset set to 37 seconds
Jan 26 15:29:26 np0005595918.novalocal dracut[1286]: *** Including module: prefixdevname ***
Jan 26 15:29:26 np0005595918.novalocal dracut[1286]: *** Including module: kernel-modules ***
Jan 26 15:29:27 np0005595918.novalocal kernel: block vda: the capability attribute has been deprecated.
Jan 26 15:29:27 np0005595918.novalocal dracut[1286]: *** Including module: kernel-modules-extra ***
Jan 26 15:29:27 np0005595918.novalocal dracut[1286]:   kernel-modules-extra: configuration source "/run/depmod.d" does not exist
Jan 26 15:29:27 np0005595918.novalocal dracut[1286]:   kernel-modules-extra: configuration source "/lib/depmod.d" does not exist
Jan 26 15:29:27 np0005595918.novalocal dracut[1286]:   kernel-modules-extra: parsing configuration file "/etc/depmod.d/dist.conf"
Jan 26 15:29:27 np0005595918.novalocal dracut[1286]:   kernel-modules-extra: /etc/depmod.d/dist.conf: added "updates extra built-in weak-updates" to the list of search directories
Jan 26 15:29:27 np0005595918.novalocal dracut[1286]: *** Including module: qemu ***
Jan 26 15:29:27 np0005595918.novalocal dracut[1286]: *** Including module: fstab-sys ***
Jan 26 15:29:27 np0005595918.novalocal dracut[1286]: *** Including module: rootfs-block ***
Jan 26 15:29:27 np0005595918.novalocal dracut[1286]: *** Including module: terminfo ***
Jan 26 15:29:27 np0005595918.novalocal dracut[1286]: *** Including module: udev-rules ***
Jan 26 15:29:28 np0005595918.novalocal dracut[1286]: Skipping udev rule: 91-permissions.rules
Jan 26 15:29:28 np0005595918.novalocal dracut[1286]: Skipping udev rule: 80-drivers-modprobe.rules
Jan 26 15:29:28 np0005595918.novalocal dracut[1286]: *** Including module: virtiofs ***
Jan 26 15:29:28 np0005595918.novalocal dracut[1286]: *** Including module: dracut-systemd ***
Jan 26 15:29:28 np0005595918.novalocal dracut[1286]: *** Including module: usrmount ***
Jan 26 15:29:28 np0005595918.novalocal dracut[1286]: *** Including module: base ***
Jan 26 15:29:28 np0005595918.novalocal dracut[1286]: *** Including module: fs-lib ***
Jan 26 15:29:28 np0005595918.novalocal dracut[1286]: *** Including module: kdumpbase ***
Jan 26 15:29:29 np0005595918.novalocal dracut[1286]: *** Including module: microcode_ctl-fw_dir_override ***
Jan 26 15:29:29 np0005595918.novalocal dracut[1286]:   microcode_ctl module: mangling fw_dir
Jan 26 15:29:29 np0005595918.novalocal dracut[1286]:     microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Jan 26 15:29:29 np0005595918.novalocal dracut[1286]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Jan 26 15:29:29 np0005595918.novalocal dracut[1286]:     microcode_ctl: configuration "intel" is ignored
Jan 26 15:29:29 np0005595918.novalocal dracut[1286]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Jan 26 15:29:29 np0005595918.novalocal dracut[1286]:     microcode_ctl: configuration "intel-06-2d-07" is ignored
Jan 26 15:29:29 np0005595918.novalocal dracut[1286]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Jan 26 15:29:29 np0005595918.novalocal dracut[1286]:     microcode_ctl: configuration "intel-06-4e-03" is ignored
Jan 26 15:29:29 np0005595918.novalocal dracut[1286]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Jan 26 15:29:29 np0005595918.novalocal dracut[1286]:     microcode_ctl: configuration "intel-06-4f-01" is ignored
Jan 26 15:29:29 np0005595918.novalocal dracut[1286]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Jan 26 15:29:29 np0005595918.novalocal dracut[1286]:     microcode_ctl: configuration "intel-06-55-04" is ignored
Jan 26 15:29:29 np0005595918.novalocal dracut[1286]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Jan 26 15:29:29 np0005595918.novalocal dracut[1286]:     microcode_ctl: configuration "intel-06-5e-03" is ignored
Jan 26 15:29:29 np0005595918.novalocal dracut[1286]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Jan 26 15:29:29 np0005595918.novalocal dracut[1286]:     microcode_ctl: configuration "intel-06-8c-01" is ignored
Jan 26 15:29:29 np0005595918.novalocal dracut[1286]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Jan 26 15:29:29 np0005595918.novalocal dracut[1286]:     microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Jan 26 15:29:29 np0005595918.novalocal dracut[1286]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Jan 26 15:29:29 np0005595918.novalocal dracut[1286]:     microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Jan 26 15:29:29 np0005595918.novalocal dracut[1286]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Jan 26 15:29:29 np0005595918.novalocal dracut[1286]:     microcode_ctl: configuration "intel-06-8f-08" is ignored
Jan 26 15:29:29 np0005595918.novalocal dracut[1286]:     microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Jan 26 15:29:29 np0005595918.novalocal dracut[1286]: *** Including module: openssl ***
Jan 26 15:29:29 np0005595918.novalocal dracut[1286]: *** Including module: shutdown ***
Jan 26 15:29:29 np0005595918.novalocal dracut[1286]: *** Including module: squash ***
Jan 26 15:29:29 np0005595918.novalocal dracut[1286]: *** Including modules done ***
Jan 26 15:29:29 np0005595918.novalocal dracut[1286]: *** Installing kernel module dependencies ***
Jan 26 15:29:30 np0005595918.novalocal dracut[1286]: *** Installing kernel module dependencies done ***
Jan 26 15:29:30 np0005595918.novalocal dracut[1286]: *** Resolving executable dependencies ***
Jan 26 15:29:31 np0005595918.novalocal irqbalance[787]: Cannot change IRQ 25 affinity: Operation not permitted
Jan 26 15:29:31 np0005595918.novalocal irqbalance[787]: IRQ 25 affinity is now unmanaged
Jan 26 15:29:31 np0005595918.novalocal irqbalance[787]: Cannot change IRQ 31 affinity: Operation not permitted
Jan 26 15:29:31 np0005595918.novalocal irqbalance[787]: IRQ 31 affinity is now unmanaged
Jan 26 15:29:31 np0005595918.novalocal irqbalance[787]: Cannot change IRQ 28 affinity: Operation not permitted
Jan 26 15:29:31 np0005595918.novalocal irqbalance[787]: IRQ 28 affinity is now unmanaged
Jan 26 15:29:31 np0005595918.novalocal irqbalance[787]: Cannot change IRQ 32 affinity: Operation not permitted
Jan 26 15:29:31 np0005595918.novalocal irqbalance[787]: IRQ 32 affinity is now unmanaged
Jan 26 15:29:31 np0005595918.novalocal irqbalance[787]: Cannot change IRQ 30 affinity: Operation not permitted
Jan 26 15:29:31 np0005595918.novalocal irqbalance[787]: IRQ 30 affinity is now unmanaged
Jan 26 15:29:31 np0005595918.novalocal irqbalance[787]: Cannot change IRQ 29 affinity: Operation not permitted
Jan 26 15:29:31 np0005595918.novalocal irqbalance[787]: IRQ 29 affinity is now unmanaged
Jan 26 15:29:31 np0005595918.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 26 15:29:31 np0005595918.novalocal dracut[1286]: *** Resolving executable dependencies done ***
Jan 26 15:29:32 np0005595918.novalocal dracut[1286]: *** Generating early-microcode cpio image ***
Jan 26 15:29:32 np0005595918.novalocal dracut[1286]: *** Store current command line parameters ***
Jan 26 15:29:32 np0005595918.novalocal dracut[1286]: Stored kernel commandline:
Jan 26 15:29:32 np0005595918.novalocal dracut[1286]: No dracut internal kernel commandline stored in the initramfs
Jan 26 15:29:32 np0005595918.novalocal dracut[1286]: *** Install squash loader ***
Jan 26 15:29:33 np0005595918.novalocal dracut[1286]: *** Squashing the files inside the initramfs ***
Jan 26 15:29:34 np0005595918.novalocal dracut[1286]: *** Squashing the files inside the initramfs done ***
Jan 26 15:29:34 np0005595918.novalocal dracut[1286]: *** Creating image file '/boot/initramfs-5.14.0-661.el9.x86_64kdump.img' ***
Jan 26 15:29:34 np0005595918.novalocal dracut[1286]: *** Hardlinking files ***
Jan 26 15:29:34 np0005595918.novalocal dracut[1286]: Mode:           real
Jan 26 15:29:34 np0005595918.novalocal dracut[1286]: Files:          50
Jan 26 15:29:34 np0005595918.novalocal dracut[1286]: Linked:         0 files
Jan 26 15:29:34 np0005595918.novalocal dracut[1286]: Compared:       0 xattrs
Jan 26 15:29:34 np0005595918.novalocal dracut[1286]: Compared:       0 files
Jan 26 15:29:34 np0005595918.novalocal dracut[1286]: Saved:          0 B
Jan 26 15:29:34 np0005595918.novalocal dracut[1286]: Duration:       0.000447 seconds
Jan 26 15:29:34 np0005595918.novalocal dracut[1286]: *** Hardlinking files done ***
Jan 26 15:29:34 np0005595918.novalocal dracut[1286]: *** Creating initramfs image file '/boot/initramfs-5.14.0-661.el9.x86_64kdump.img' done ***
Jan 26 15:29:36 np0005595918.novalocal kdumpctl[1020]: kdump: kexec: loaded kdump kernel
Jan 26 15:29:36 np0005595918.novalocal kdumpctl[1020]: kdump: Starting kdump: [OK]
Jan 26 15:29:36 np0005595918.novalocal systemd[1]: Finished Crash recovery kernel arming.
Jan 26 15:29:36 np0005595918.novalocal systemd[1]: Startup finished in 2.857s (kernel) + 3.230s (initrd) + 18.919s (userspace) = 25.006s.
Jan 26 15:29:41 np0005595918.novalocal sshd-session[4302]: Accepted publickey for zuul from 38.102.83.114 port 52422 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Jan 26 15:29:41 np0005595918.novalocal systemd[1]: Created slice User Slice of UID 1000.
Jan 26 15:29:41 np0005595918.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Jan 26 15:29:41 np0005595918.novalocal systemd-logind[788]: New session 1 of user zuul.
Jan 26 15:29:41 np0005595918.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Jan 26 15:29:41 np0005595918.novalocal systemd[1]: Starting User Manager for UID 1000...
Jan 26 15:29:41 np0005595918.novalocal systemd[4306]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 26 15:29:41 np0005595918.novalocal systemd[4306]: Queued start job for default target Main User Target.
Jan 26 15:29:41 np0005595918.novalocal systemd[4306]: Created slice User Application Slice.
Jan 26 15:29:41 np0005595918.novalocal systemd[4306]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 26 15:29:41 np0005595918.novalocal systemd[4306]: Started Daily Cleanup of User's Temporary Directories.
Jan 26 15:29:41 np0005595918.novalocal systemd[4306]: Reached target Paths.
Jan 26 15:29:41 np0005595918.novalocal systemd[4306]: Reached target Timers.
Jan 26 15:29:41 np0005595918.novalocal systemd[4306]: Starting D-Bus User Message Bus Socket...
Jan 26 15:29:41 np0005595918.novalocal systemd[4306]: Starting Create User's Volatile Files and Directories...
Jan 26 15:29:41 np0005595918.novalocal systemd[4306]: Listening on D-Bus User Message Bus Socket.
Jan 26 15:29:41 np0005595918.novalocal systemd[4306]: Finished Create User's Volatile Files and Directories.
Jan 26 15:29:41 np0005595918.novalocal systemd[4306]: Reached target Sockets.
Jan 26 15:29:41 np0005595918.novalocal systemd[4306]: Reached target Basic System.
Jan 26 15:29:41 np0005595918.novalocal systemd[4306]: Reached target Main User Target.
Jan 26 15:29:41 np0005595918.novalocal systemd[4306]: Startup finished in 122ms.
Jan 26 15:29:41 np0005595918.novalocal systemd[1]: Started User Manager for UID 1000.
Jan 26 15:29:41 np0005595918.novalocal systemd[1]: Started Session 1 of User zuul.
Jan 26 15:29:41 np0005595918.novalocal sshd-session[4302]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 26 15:29:42 np0005595918.novalocal python3[4388]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 15:29:43 np0005595918.novalocal sshd-session[4393]: Invalid user admin from 45.148.10.121 port 55006
Jan 26 15:29:43 np0005595918.novalocal sshd-session[4393]: Connection closed by invalid user admin 45.148.10.121 port 55006 [preauth]
Jan 26 15:29:44 np0005595918.novalocal python3[4418]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 15:29:50 np0005595918.novalocal python3[4476]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 15:29:50 np0005595918.novalocal python3[4516]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Jan 26 15:29:51 np0005595918.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 26 15:29:53 np0005595918.novalocal python3[4544]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDWoFtgRtFXjtpWV+aeUv7OAiOO5zBaXsmxKs7mpcofw6e72CdNppjMIPj1Um3jklCxT2PF6HeQUzuMrBN68bPvSMJJ/VCyeKpLb9RhNgkV1e31JoWXFYHMdZQuwyELaVG7W/2kUYsRknU4ztDYPvJH69l17sn3UtirBZrnupws8VvHK7eWGcpAJAF+Ns6WKDGi7vVi1YUInw/498K/RLtUCmMhrvjhhb2BEuijtk9sWMKhScL9orhjr5vZgG3OyNLFqvl7XWi4pERWtVjYkoprxFEIdxyVyBo3h505qybl8tjZqfewLPcsCQTSVkkELXIN4sl3fN6MfjVSDCFhDsiBhK/P/S3hmXvrv79AI3KxIP3HFCNZ4cYHgHQdOHzG0RD/iHONH12smlqEKGk6oMhI7Nr9MHs99a98aOz/QeR2XxgDg2Kz2FAzmjL7P9xsOulsb7iNTVa5gMJDzwGnRXdG5EkJu6D9wHqfaDPg40ZNFK5h9NJz3bkYHpQxmXXD9w8= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 15:29:53 np0005595918.novalocal python3[4568]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:29:53 np0005595918.novalocal python3[4667]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 26 15:29:54 np0005595918.novalocal python3[4738]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769441393.62108-207-2807775099141/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=019c1a07921f4144913a517f84c6219b_id_rsa follow=False checksum=b291817db3bf860552f8c0e81683d4f3de844850 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:29:54 np0005595918.novalocal python3[4861]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 26 15:29:55 np0005595918.novalocal python3[4932]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769441394.5564418-240-45481133944255/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=019c1a07921f4144913a517f84c6219b_id_rsa.pub follow=False checksum=96e6903876fee52000fcf5bba1dc2bafd5b83608 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:29:56 np0005595918.novalocal python3[4980]: ansible-ping Invoked with data=pong
Jan 26 15:29:57 np0005595918.novalocal python3[5004]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 15:29:59 np0005595918.novalocal python3[5062]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Jan 26 15:30:00 np0005595918.novalocal python3[5094]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:30:00 np0005595918.novalocal python3[5118]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:30:00 np0005595918.novalocal python3[5142]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:30:01 np0005595918.novalocal python3[5166]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:30:01 np0005595918.novalocal python3[5190]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:30:01 np0005595918.novalocal python3[5214]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:30:03 np0005595918.novalocal sudo[5238]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtqnphftohavxugwjnfwcjoqubghkpcn ; /usr/bin/python3'
Jan 26 15:30:03 np0005595918.novalocal sudo[5238]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:30:03 np0005595918.novalocal python3[5240]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:30:03 np0005595918.novalocal sudo[5238]: pam_unix(sudo:session): session closed for user root
Jan 26 15:30:03 np0005595918.novalocal sudo[5316]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvaiqzwdxylbruuoksitjvxyjvfovexb ; /usr/bin/python3'
Jan 26 15:30:03 np0005595918.novalocal sudo[5316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:30:03 np0005595918.novalocal python3[5318]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 26 15:30:03 np0005595918.novalocal sudo[5316]: pam_unix(sudo:session): session closed for user root
Jan 26 15:30:04 np0005595918.novalocal sudo[5389]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-orczqmpoxqcndvxhizmuzkoyfdogxmlz ; /usr/bin/python3'
Jan 26 15:30:04 np0005595918.novalocal sudo[5389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:30:04 np0005595918.novalocal python3[5391]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1769441403.3286266-21-221450294127076/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:30:04 np0005595918.novalocal sudo[5389]: pam_unix(sudo:session): session closed for user root
Jan 26 15:30:04 np0005595918.novalocal python3[5439]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 15:30:05 np0005595918.novalocal python3[5463]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 15:30:05 np0005595918.novalocal python3[5487]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 15:30:05 np0005595918.novalocal python3[5511]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 15:30:06 np0005595918.novalocal python3[5535]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 15:30:06 np0005595918.novalocal python3[5559]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 15:30:06 np0005595918.novalocal python3[5583]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 15:30:06 np0005595918.novalocal python3[5607]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 15:30:07 np0005595918.novalocal python3[5631]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 15:30:07 np0005595918.novalocal python3[5655]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 15:30:07 np0005595918.novalocal python3[5679]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 15:30:08 np0005595918.novalocal python3[5703]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 15:30:08 np0005595918.novalocal python3[5727]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 15:30:08 np0005595918.novalocal python3[5751]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 15:30:08 np0005595918.novalocal python3[5775]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 15:30:09 np0005595918.novalocal python3[5799]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 15:30:09 np0005595918.novalocal python3[5823]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 15:30:09 np0005595918.novalocal python3[5847]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 15:30:10 np0005595918.novalocal python3[5871]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 15:30:10 np0005595918.novalocal python3[5895]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 15:30:10 np0005595918.novalocal python3[5919]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 15:30:10 np0005595918.novalocal python3[5943]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 15:30:11 np0005595918.novalocal python3[5967]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 15:30:11 np0005595918.novalocal python3[5991]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 15:30:11 np0005595918.novalocal python3[6015]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 15:30:11 np0005595918.novalocal python3[6039]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 15:30:14 np0005595918.novalocal sudo[6063]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyjtwdthocnsxboyzjtwshepjdyvktyp ; /usr/bin/python3'
Jan 26 15:30:14 np0005595918.novalocal sudo[6063]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:30:14 np0005595918.novalocal python3[6065]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 26 15:30:14 np0005595918.novalocal systemd[1]: Starting Time & Date Service...
Jan 26 15:30:14 np0005595918.novalocal systemd[1]: Started Time & Date Service.
Jan 26 15:30:14 np0005595918.novalocal systemd-timedated[6067]: Changed time zone to 'UTC' (UTC).
Jan 26 15:30:14 np0005595918.novalocal sudo[6063]: pam_unix(sudo:session): session closed for user root
Jan 26 15:30:15 np0005595918.novalocal sudo[6094]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mphegaxxyvjmgftpbhsbwgahkfvsigxj ; /usr/bin/python3'
Jan 26 15:30:15 np0005595918.novalocal sudo[6094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:30:15 np0005595918.novalocal python3[6096]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:30:15 np0005595918.novalocal sudo[6094]: pam_unix(sudo:session): session closed for user root
Jan 26 15:30:16 np0005595918.novalocal python3[6172]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 26 15:30:16 np0005595918.novalocal python3[6243]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1769441416.0511558-153-197129242456735/source _original_basename=tmp4_an00p_ follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:30:17 np0005595918.novalocal python3[6343]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 26 15:30:18 np0005595918.novalocal python3[6414]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1769441416.852514-183-228596036811091/source _original_basename=tmpftmn5xuw follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:30:18 np0005595918.novalocal sudo[6514]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpxbalztdfbzptbapcoacntqcqsvhxsc ; /usr/bin/python3'
Jan 26 15:30:18 np0005595918.novalocal sudo[6514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:30:18 np0005595918.novalocal python3[6516]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 26 15:30:18 np0005595918.novalocal sudo[6514]: pam_unix(sudo:session): session closed for user root
Jan 26 15:30:19 np0005595918.novalocal sudo[6587]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpjtcikwcbrmihhnovimbwjhiffacxfx ; /usr/bin/python3'
Jan 26 15:30:19 np0005595918.novalocal sudo[6587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:30:19 np0005595918.novalocal python3[6589]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1769441418.6037152-231-89374935097257/source _original_basename=tmpf_41y4xy follow=False checksum=eb31a54ab353993df0881d335bb57aa163860e42 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:30:19 np0005595918.novalocal sudo[6587]: pam_unix(sudo:session): session closed for user root
Jan 26 15:30:19 np0005595918.novalocal python3[6637]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 15:30:19 np0005595918.novalocal python3[6663]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 15:30:20 np0005595918.novalocal sudo[6741]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsbinoobgpmdhmllkecpxwbrbjsazmxa ; /usr/bin/python3'
Jan 26 15:30:20 np0005595918.novalocal sudo[6741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:30:20 np0005595918.novalocal python3[6743]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 26 15:30:20 np0005595918.novalocal sudo[6741]: pam_unix(sudo:session): session closed for user root
Jan 26 15:30:20 np0005595918.novalocal sudo[6814]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amumozimuaonzxmgusnaurtijiwffmbe ; /usr/bin/python3'
Jan 26 15:30:20 np0005595918.novalocal sudo[6814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:30:20 np0005595918.novalocal python3[6816]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1769441420.3804383-273-31308331668693/source _original_basename=tmp6iwln0vv follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:30:20 np0005595918.novalocal sudo[6814]: pam_unix(sudo:session): session closed for user root
Jan 26 15:30:21 np0005595918.novalocal sudo[6865]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfqdnjcsnowqeryxghxwumdcwdywnuzp ; /usr/bin/python3'
Jan 26 15:30:21 np0005595918.novalocal sudo[6865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:30:21 np0005595918.novalocal python3[6867]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ec2-ffbe-06eb-ed11-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 15:30:21 np0005595918.novalocal sudo[6865]: pam_unix(sudo:session): session closed for user root
Jan 26 15:30:22 np0005595918.novalocal python3[6895]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env
                                                       _uses_shell=True zuul_log_id=fa163ec2-ffbe-06eb-ed11-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Jan 26 15:30:23 np0005595918.novalocal python3[6924]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:30:42 np0005595918.novalocal sudo[6948]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idsfyroingsvepyzmsgbswuzngtlrisq ; /usr/bin/python3'
Jan 26 15:30:42 np0005595918.novalocal sudo[6948]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:30:42 np0005595918.novalocal python3[6950]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:30:42 np0005595918.novalocal sudo[6948]: pam_unix(sudo:session): session closed for user root
Jan 26 15:30:44 np0005595918.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 26 15:31:18 np0005595918.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 26 15:31:18 np0005595918.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Jan 26 15:31:18 np0005595918.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Jan 26 15:31:18 np0005595918.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Jan 26 15:31:18 np0005595918.novalocal kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Jan 26 15:31:18 np0005595918.novalocal kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Jan 26 15:31:18 np0005595918.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Jan 26 15:31:18 np0005595918.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Jan 26 15:31:18 np0005595918.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Jan 26 15:31:18 np0005595918.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Jan 26 15:31:18 np0005595918.novalocal NetworkManager[859]: <info>  [1769441478.8922] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 26 15:31:18 np0005595918.novalocal systemd-udevd[6953]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 15:31:18 np0005595918.novalocal NetworkManager[859]: <info>  [1769441478.9178] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 26 15:31:18 np0005595918.novalocal NetworkManager[859]: <info>  [1769441478.9204] settings: (eth1): created default wired connection 'Wired connection 1'
Jan 26 15:31:18 np0005595918.novalocal NetworkManager[859]: <info>  [1769441478.9206] device (eth1): carrier: link connected
Jan 26 15:31:18 np0005595918.novalocal NetworkManager[859]: <info>  [1769441478.9208] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 26 15:31:18 np0005595918.novalocal NetworkManager[859]: <info>  [1769441478.9213] policy: auto-activating connection 'Wired connection 1' (d2f7cd60-7192-331c-9fdd-34ee6dbab928)
Jan 26 15:31:18 np0005595918.novalocal NetworkManager[859]: <info>  [1769441478.9216] device (eth1): Activation: starting connection 'Wired connection 1' (d2f7cd60-7192-331c-9fdd-34ee6dbab928)
Jan 26 15:31:18 np0005595918.novalocal NetworkManager[859]: <info>  [1769441478.9217] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 26 15:31:18 np0005595918.novalocal NetworkManager[859]: <info>  [1769441478.9219] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 26 15:31:18 np0005595918.novalocal NetworkManager[859]: <info>  [1769441478.9222] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 26 15:31:18 np0005595918.novalocal NetworkManager[859]: <info>  [1769441478.9226] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 26 15:31:19 np0005595918.novalocal python3[6980]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ec2-ffbe-adb4-b4b4-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 15:31:22 np0005595918.novalocal sshd-session[6983]: Connection closed by 178.128.250.55 port 52704
Jan 26 15:31:26 np0005595918.novalocal sudo[7059]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thhfllryoldalxfdasxafaqelzjzpcwy ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 26 15:31:26 np0005595918.novalocal sudo[7059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:31:26 np0005595918.novalocal python3[7061]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 26 15:31:26 np0005595918.novalocal sudo[7059]: pam_unix(sudo:session): session closed for user root
Jan 26 15:31:27 np0005595918.novalocal sudo[7132]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svdkpkmvfqongpfuqgshqlqhthfjcocj ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 26 15:31:27 np0005595918.novalocal sudo[7132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:31:27 np0005595918.novalocal python3[7134]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769441486.4890113-102-6933198733637/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=fc788e8681857f1a0ca8992c298540cb9299b87c backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:31:27 np0005595918.novalocal sudo[7132]: pam_unix(sudo:session): session closed for user root
Jan 26 15:31:27 np0005595918.novalocal sudo[7182]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wytoexeexosqdaxfghyzhjllwagpzyqs ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 26 15:31:27 np0005595918.novalocal sudo[7182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:31:27 np0005595918.novalocal python3[7184]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 26 15:31:28 np0005595918.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 26 15:31:28 np0005595918.novalocal systemd[1]: Stopped Network Manager Wait Online.
Jan 26 15:31:28 np0005595918.novalocal systemd[1]: Stopping Network Manager Wait Online...
Jan 26 15:31:28 np0005595918.novalocal systemd[1]: Stopping Network Manager...
Jan 26 15:31:28 np0005595918.novalocal NetworkManager[859]: <info>  [1769441488.0412] caught SIGTERM, shutting down normally.
Jan 26 15:31:28 np0005595918.novalocal NetworkManager[859]: <info>  [1769441488.0430] dhcp4 (eth0): canceled DHCP transaction
Jan 26 15:31:28 np0005595918.novalocal NetworkManager[859]: <info>  [1769441488.0431] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 26 15:31:28 np0005595918.novalocal NetworkManager[859]: <info>  [1769441488.0431] dhcp4 (eth0): state changed no lease
Jan 26 15:31:28 np0005595918.novalocal NetworkManager[859]: <info>  [1769441488.0437] manager: NetworkManager state is now CONNECTING
Jan 26 15:31:28 np0005595918.novalocal NetworkManager[859]: <info>  [1769441488.0541] dhcp4 (eth1): canceled DHCP transaction
Jan 26 15:31:28 np0005595918.novalocal NetworkManager[859]: <info>  [1769441488.0541] dhcp4 (eth1): state changed no lease
Jan 26 15:31:28 np0005595918.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 26 15:31:28 np0005595918.novalocal NetworkManager[859]: <info>  [1769441488.0617] exiting (success)
Jan 26 15:31:28 np0005595918.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 26 15:31:28 np0005595918.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 26 15:31:28 np0005595918.novalocal systemd[1]: Stopped Network Manager.
Jan 26 15:31:28 np0005595918.novalocal systemd[1]: NetworkManager.service: Consumed 1.097s CPU time, 10.0M memory peak.
Jan 26 15:31:28 np0005595918.novalocal systemd[1]: Starting Network Manager...
Jan 26 15:31:28 np0005595918.novalocal NetworkManager[7193]: <info>  [1769441488.1041] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:5124ce38-efa8-40f4-a4ab-032935f2d131)
Jan 26 15:31:28 np0005595918.novalocal NetworkManager[7193]: <info>  [1769441488.1044] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 26 15:31:28 np0005595918.novalocal NetworkManager[7193]: <info>  [1769441488.1103] manager[0x559d76469000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 26 15:31:28 np0005595918.novalocal systemd[1]: Starting Hostname Service...
Jan 26 15:31:28 np0005595918.novalocal systemd[1]: Started Hostname Service.
Jan 26 15:31:28 np0005595918.novalocal NetworkManager[7193]: <info>  [1769441488.2326] hostname: hostname: using hostnamed
Jan 26 15:31:28 np0005595918.novalocal NetworkManager[7193]: <info>  [1769441488.2329] hostname: static hostname changed from (none) to "np0005595918.novalocal"
Jan 26 15:31:28 np0005595918.novalocal NetworkManager[7193]: <info>  [1769441488.2333] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 26 15:31:28 np0005595918.novalocal NetworkManager[7193]: <info>  [1769441488.2339] manager[0x559d76469000]: rfkill: Wi-Fi hardware radio set enabled
Jan 26 15:31:28 np0005595918.novalocal NetworkManager[7193]: <info>  [1769441488.2339] manager[0x559d76469000]: rfkill: WWAN hardware radio set enabled
Jan 26 15:31:28 np0005595918.novalocal NetworkManager[7193]: <info>  [1769441488.2362] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 26 15:31:28 np0005595918.novalocal NetworkManager[7193]: <info>  [1769441488.2362] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 26 15:31:28 np0005595918.novalocal NetworkManager[7193]: <info>  [1769441488.2363] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 26 15:31:28 np0005595918.novalocal NetworkManager[7193]: <info>  [1769441488.2363] manager: Networking is enabled by state file
Jan 26 15:31:28 np0005595918.novalocal NetworkManager[7193]: <info>  [1769441488.2365] settings: Loaded settings plugin: keyfile (internal)
Jan 26 15:31:28 np0005595918.novalocal NetworkManager[7193]: <info>  [1769441488.2368] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 26 15:31:28 np0005595918.novalocal NetworkManager[7193]: <info>  [1769441488.2391] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 26 15:31:28 np0005595918.novalocal NetworkManager[7193]: <info>  [1769441488.2399] dhcp: init: Using DHCP client 'internal'
Jan 26 15:31:28 np0005595918.novalocal NetworkManager[7193]: <info>  [1769441488.2401] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 26 15:31:28 np0005595918.novalocal NetworkManager[7193]: <info>  [1769441488.2404] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 15:31:28 np0005595918.novalocal NetworkManager[7193]: <info>  [1769441488.2408] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 26 15:31:28 np0005595918.novalocal NetworkManager[7193]: <info>  [1769441488.2413] device (lo): Activation: starting connection 'lo' (8f11ff48-691a-496d-8a19-1570898b30be)
Jan 26 15:31:28 np0005595918.novalocal NetworkManager[7193]: <info>  [1769441488.2418] device (eth0): carrier: link connected
Jan 26 15:31:28 np0005595918.novalocal NetworkManager[7193]: <info>  [1769441488.2421] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 26 15:31:28 np0005595918.novalocal NetworkManager[7193]: <info>  [1769441488.2424] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 26 15:31:28 np0005595918.novalocal NetworkManager[7193]: <info>  [1769441488.2424] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 26 15:31:28 np0005595918.novalocal NetworkManager[7193]: <info>  [1769441488.2428] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 26 15:31:28 np0005595918.novalocal NetworkManager[7193]: <info>  [1769441488.2432] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 26 15:31:28 np0005595918.novalocal NetworkManager[7193]: <info>  [1769441488.2436] device (eth1): carrier: link connected
Jan 26 15:31:28 np0005595918.novalocal NetworkManager[7193]: <info>  [1769441488.2439] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 26 15:31:28 np0005595918.novalocal NetworkManager[7193]: <info>  [1769441488.2442] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (d2f7cd60-7192-331c-9fdd-34ee6dbab928) (indicated)
Jan 26 15:31:28 np0005595918.novalocal NetworkManager[7193]: <info>  [1769441488.2443] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 26 15:31:28 np0005595918.novalocal NetworkManager[7193]: <info>  [1769441488.2445] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 26 15:31:28 np0005595918.novalocal NetworkManager[7193]: <info>  [1769441488.2451] device (eth1): Activation: starting connection 'Wired connection 1' (d2f7cd60-7192-331c-9fdd-34ee6dbab928)
Jan 26 15:31:28 np0005595918.novalocal NetworkManager[7193]: <info>  [1769441488.2459] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 26 15:31:28 np0005595918.novalocal systemd[1]: Started Network Manager.
Jan 26 15:31:28 np0005595918.novalocal NetworkManager[7193]: <info>  [1769441488.2464] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 26 15:31:28 np0005595918.novalocal NetworkManager[7193]: <info>  [1769441488.2465] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 26 15:31:28 np0005595918.novalocal NetworkManager[7193]: <info>  [1769441488.2466] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 26 15:31:28 np0005595918.novalocal NetworkManager[7193]: <info>  [1769441488.2468] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 26 15:31:28 np0005595918.novalocal NetworkManager[7193]: <info>  [1769441488.2470] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 26 15:31:28 np0005595918.novalocal NetworkManager[7193]: <info>  [1769441488.2471] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 26 15:31:28 np0005595918.novalocal NetworkManager[7193]: <info>  [1769441488.2473] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 26 15:31:28 np0005595918.novalocal NetworkManager[7193]: <info>  [1769441488.2474] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 26 15:31:28 np0005595918.novalocal NetworkManager[7193]: <info>  [1769441488.2478] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 26 15:31:28 np0005595918.novalocal NetworkManager[7193]: <info>  [1769441488.2481] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 26 15:31:28 np0005595918.novalocal NetworkManager[7193]: <info>  [1769441488.2487] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 26 15:31:28 np0005595918.novalocal NetworkManager[7193]: <info>  [1769441488.2488] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 26 15:31:28 np0005595918.novalocal NetworkManager[7193]: <info>  [1769441488.2522] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 26 15:31:28 np0005595918.novalocal NetworkManager[7193]: <info>  [1769441488.2523] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 26 15:31:28 np0005595918.novalocal NetworkManager[7193]: <info>  [1769441488.2527] device (lo): Activation: successful, device activated.
Jan 26 15:31:28 np0005595918.novalocal NetworkManager[7193]: <info>  [1769441488.2543] dhcp4 (eth0): state changed new lease, address=38.102.83.142
Jan 26 15:31:28 np0005595918.novalocal NetworkManager[7193]: <info>  [1769441488.2547] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 26 15:31:28 np0005595918.novalocal NetworkManager[7193]: <info>  [1769441488.2627] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 26 15:31:28 np0005595918.novalocal NetworkManager[7193]: <info>  [1769441488.2649] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 26 15:31:28 np0005595918.novalocal NetworkManager[7193]: <info>  [1769441488.2650] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 26 15:31:28 np0005595918.novalocal NetworkManager[7193]: <info>  [1769441488.2653] manager: NetworkManager state is now CONNECTED_SITE
Jan 26 15:31:28 np0005595918.novalocal NetworkManager[7193]: <info>  [1769441488.2656] device (eth0): Activation: successful, device activated.
Jan 26 15:31:28 np0005595918.novalocal NetworkManager[7193]: <info>  [1769441488.2662] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 26 15:31:28 np0005595918.novalocal systemd[1]: Starting Network Manager Wait Online...
Jan 26 15:31:28 np0005595918.novalocal sudo[7182]: pam_unix(sudo:session): session closed for user root
Jan 26 15:31:28 np0005595918.novalocal python3[7268]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ec2-ffbe-adb4-b4b4-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 15:31:38 np0005595918.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 26 15:31:54 np0005595918.novalocal systemd[4306]: Starting Mark boot as successful...
Jan 26 15:31:54 np0005595918.novalocal systemd[4306]: Finished Mark boot as successful.
Jan 26 15:31:58 np0005595918.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 26 15:32:13 np0005595918.novalocal NetworkManager[7193]: <info>  [1769441533.1797] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 26 15:32:13 np0005595918.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 26 15:32:13 np0005595918.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 26 15:32:13 np0005595918.novalocal NetworkManager[7193]: <info>  [1769441533.2126] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 26 15:32:13 np0005595918.novalocal NetworkManager[7193]: <info>  [1769441533.2128] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 26 15:32:13 np0005595918.novalocal NetworkManager[7193]: <info>  [1769441533.2135] device (eth1): Activation: successful, device activated.
Jan 26 15:32:13 np0005595918.novalocal NetworkManager[7193]: <info>  [1769441533.2140] manager: startup complete
Jan 26 15:32:13 np0005595918.novalocal NetworkManager[7193]: <info>  [1769441533.2142] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Jan 26 15:32:13 np0005595918.novalocal NetworkManager[7193]: <warn>  [1769441533.2146] device (eth1): Activation: failed for connection 'Wired connection 1'
Jan 26 15:32:13 np0005595918.novalocal NetworkManager[7193]: <info>  [1769441533.2153] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Jan 26 15:32:13 np0005595918.novalocal systemd[1]: Finished Network Manager Wait Online.
Jan 26 15:32:13 np0005595918.novalocal NetworkManager[7193]: <info>  [1769441533.2220] dhcp4 (eth1): canceled DHCP transaction
Jan 26 15:32:13 np0005595918.novalocal NetworkManager[7193]: <info>  [1769441533.2221] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 26 15:32:13 np0005595918.novalocal NetworkManager[7193]: <info>  [1769441533.2221] dhcp4 (eth1): state changed no lease
Jan 26 15:32:13 np0005595918.novalocal NetworkManager[7193]: <info>  [1769441533.2235] policy: auto-activating connection 'ci-private-network' (029c721f-b037-502e-8185-a257ece4e436)
Jan 26 15:32:13 np0005595918.novalocal NetworkManager[7193]: <info>  [1769441533.2239] device (eth1): Activation: starting connection 'ci-private-network' (029c721f-b037-502e-8185-a257ece4e436)
Jan 26 15:32:13 np0005595918.novalocal NetworkManager[7193]: <info>  [1769441533.2240] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 26 15:32:13 np0005595918.novalocal NetworkManager[7193]: <info>  [1769441533.2242] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 26 15:32:13 np0005595918.novalocal NetworkManager[7193]: <info>  [1769441533.2248] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 26 15:32:13 np0005595918.novalocal NetworkManager[7193]: <info>  [1769441533.2257] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 26 15:32:13 np0005595918.novalocal NetworkManager[7193]: <info>  [1769441533.2299] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 26 15:32:13 np0005595918.novalocal NetworkManager[7193]: <info>  [1769441533.2301] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 26 15:32:13 np0005595918.novalocal NetworkManager[7193]: <info>  [1769441533.2305] device (eth1): Activation: successful, device activated.
Jan 26 15:32:23 np0005595918.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 26 15:32:28 np0005595918.novalocal sshd-session[4315]: Received disconnect from 38.102.83.114 port 52422:11: disconnected by user
Jan 26 15:32:28 np0005595918.novalocal sshd-session[4315]: Disconnected from user zuul 38.102.83.114 port 52422
Jan 26 15:32:28 np0005595918.novalocal sshd-session[4302]: pam_unix(sshd:session): session closed for user zuul
Jan 26 15:32:28 np0005595918.novalocal systemd-logind[788]: Session 1 logged out. Waiting for processes to exit.
Jan 26 15:32:28 np0005595918.novalocal sshd-session[7297]: Accepted publickey for zuul from 38.102.83.114 port 51272 ssh2: RSA SHA256:CwDInbOSxpxqp3mWwtfmY0v0Zi73QXMq6svTI6Qp+40
Jan 26 15:32:28 np0005595918.novalocal systemd-logind[788]: New session 3 of user zuul.
Jan 26 15:32:28 np0005595918.novalocal systemd[1]: Started Session 3 of User zuul.
Jan 26 15:32:28 np0005595918.novalocal sshd-session[7297]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 26 15:32:28 np0005595918.novalocal sudo[7376]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-virckdlihupujqqecxsqgkykzpdallls ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 26 15:32:28 np0005595918.novalocal sudo[7376]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:32:29 np0005595918.novalocal python3[7378]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 26 15:32:29 np0005595918.novalocal sudo[7376]: pam_unix(sudo:session): session closed for user root
Jan 26 15:32:29 np0005595918.novalocal sudo[7449]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psbuozjdlqygpkxlxgweiyuuucalhgqz ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 26 15:32:29 np0005595918.novalocal sudo[7449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:32:29 np0005595918.novalocal python3[7451]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769441548.7665315-259-210089631580652/source _original_basename=tmpiiarf4u7 follow=False checksum=d7288a8abb0fa1f3198577bd4de074a20a14898c backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:32:29 np0005595918.novalocal sudo[7449]: pam_unix(sudo:session): session closed for user root
Jan 26 15:32:31 np0005595918.novalocal sshd-session[7300]: Connection closed by 38.102.83.114 port 51272
Jan 26 15:32:31 np0005595918.novalocal sshd-session[7297]: pam_unix(sshd:session): session closed for user zuul
Jan 26 15:32:31 np0005595918.novalocal systemd[1]: session-3.scope: Deactivated successfully.
Jan 26 15:32:31 np0005595918.novalocal systemd-logind[788]: Session 3 logged out. Waiting for processes to exit.
Jan 26 15:32:31 np0005595918.novalocal systemd-logind[788]: Removed session 3.
Jan 26 15:34:54 np0005595918.novalocal systemd[4306]: Created slice User Background Tasks Slice.
Jan 26 15:34:54 np0005595918.novalocal systemd[4306]: Starting Cleanup of User's Temporary Files and Directories...
Jan 26 15:34:54 np0005595918.novalocal systemd[4306]: Finished Cleanup of User's Temporary Files and Directories.
Jan 26 15:36:39 np0005595918.novalocal sshd-session[7482]: Connection closed by authenticating user root 178.128.250.55 port 36308 [preauth]
Jan 26 15:37:32 np0005595918.novalocal sshd-session[7484]: Connection closed by authenticating user root 178.128.250.55 port 39882 [preauth]
Jan 26 15:38:23 np0005595918.novalocal sshd-session[7486]: Connection closed by authenticating user root 178.128.250.55 port 59692 [preauth]
Jan 26 15:39:15 np0005595918.novalocal sshd-session[7488]: Connection closed by authenticating user root 178.128.250.55 port 60480 [preauth]
Jan 26 15:40:05 np0005595918.novalocal sshd-session[7490]: Connection closed by authenticating user root 178.128.250.55 port 59326 [preauth]
Jan 26 15:40:54 np0005595918.novalocal sshd-session[7494]: Accepted publickey for zuul from 38.102.83.114 port 35272 ssh2: RSA SHA256:CwDInbOSxpxqp3mWwtfmY0v0Zi73QXMq6svTI6Qp+40
Jan 26 15:40:54 np0005595918.novalocal systemd-logind[788]: New session 4 of user zuul.
Jan 26 15:40:54 np0005595918.novalocal systemd[1]: Started Session 4 of User zuul.
Jan 26 15:40:54 np0005595918.novalocal sshd-session[7494]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 26 15:40:54 np0005595918.novalocal sudo[7521]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzvjltqbeqcywjykcshbkcxtryzxcftq ; /usr/bin/python3'
Jan 26 15:40:54 np0005595918.novalocal sudo[7521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:40:54 np0005595918.novalocal python3[7523]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda
                                                       _uses_shell=True zuul_log_id=fa163ec2-ffbe-320c-360c-000000002189-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 15:40:54 np0005595918.novalocal sudo[7521]: pam_unix(sudo:session): session closed for user root
Jan 26 15:40:54 np0005595918.novalocal sudo[7550]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqjmlizknlxwnnxhveyzlldhllgydcfb ; /usr/bin/python3'
Jan 26 15:40:54 np0005595918.novalocal sudo[7550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:40:54 np0005595918.novalocal python3[7552]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:40:54 np0005595918.novalocal sudo[7550]: pam_unix(sudo:session): session closed for user root
Jan 26 15:40:55 np0005595918.novalocal sudo[7576]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbvjbpucvitktfushdirtbrifpucvyxy ; /usr/bin/python3'
Jan 26 15:40:55 np0005595918.novalocal sudo[7576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:40:55 np0005595918.novalocal python3[7578]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:40:55 np0005595918.novalocal sudo[7576]: pam_unix(sudo:session): session closed for user root
Jan 26 15:40:55 np0005595918.novalocal sudo[7602]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svtxgpasegcrqbvbvtxravgxoifoisaw ; /usr/bin/python3'
Jan 26 15:40:55 np0005595918.novalocal sudo[7602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:40:55 np0005595918.novalocal python3[7604]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:40:55 np0005595918.novalocal sudo[7602]: pam_unix(sudo:session): session closed for user root
Jan 26 15:40:55 np0005595918.novalocal sudo[7628]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyuybzklptsnaqzldexosageorvjkovq ; /usr/bin/python3'
Jan 26 15:40:55 np0005595918.novalocal sudo[7628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:40:55 np0005595918.novalocal python3[7630]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:40:55 np0005595918.novalocal sudo[7628]: pam_unix(sudo:session): session closed for user root
Jan 26 15:40:56 np0005595918.novalocal sudo[7656]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exieigeopmxxsrtiqpuwkbjpsehnkvra ; /usr/bin/python3'
Jan 26 15:40:56 np0005595918.novalocal sudo[7656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:40:56 np0005595918.novalocal python3[7658]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:40:56 np0005595918.novalocal sshd-session[7631]: Connection closed by authenticating user root 178.128.250.55 port 53988 [preauth]
Jan 26 15:40:56 np0005595918.novalocal sudo[7656]: pam_unix(sudo:session): session closed for user root
Jan 26 15:40:56 np0005595918.novalocal sudo[7734]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhgcvaefmkdncjouzagihclfiizdlkmv ; /usr/bin/python3'
Jan 26 15:40:56 np0005595918.novalocal sudo[7734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:40:56 np0005595918.novalocal python3[7736]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 26 15:40:56 np0005595918.novalocal sudo[7734]: pam_unix(sudo:session): session closed for user root
Jan 26 15:40:56 np0005595918.novalocal sudo[7807]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycrvmffkshhbbuvsjvjmkwavtljluuol ; /usr/bin/python3'
Jan 26 15:40:56 np0005595918.novalocal sudo[7807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:40:57 np0005595918.novalocal python3[7809]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769442056.4831572-524-168174796099945/source _original_basename=tmp5c_dg5pu follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:40:57 np0005595918.novalocal sudo[7807]: pam_unix(sudo:session): session closed for user root
Jan 26 15:40:57 np0005595918.novalocal sudo[7857]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgyapmxjjoanovmhljesaqmqbscqgyuf ; /usr/bin/python3'
Jan 26 15:40:57 np0005595918.novalocal sudo[7857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:40:58 np0005595918.novalocal python3[7859]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 26 15:40:58 np0005595918.novalocal systemd[1]: Reloading.
Jan 26 15:40:58 np0005595918.novalocal systemd-rc-local-generator[7878]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 15:40:58 np0005595918.novalocal sudo[7857]: pam_unix(sudo:session): session closed for user root
Jan 26 15:40:59 np0005595918.novalocal sudo[7913]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdqkdvaexkycfigtzdlmlomzfxjrseuv ; /usr/bin/python3'
Jan 26 15:40:59 np0005595918.novalocal sudo[7913]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:40:59 np0005595918.novalocal python3[7915]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Jan 26 15:40:59 np0005595918.novalocal sudo[7913]: pam_unix(sudo:session): session closed for user root
Jan 26 15:41:00 np0005595918.novalocal sudo[7939]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okurykybbytmwmhzigflfpjivljrbghp ; /usr/bin/python3'
Jan 26 15:41:00 np0005595918.novalocal sudo[7939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:41:00 np0005595918.novalocal python3[7941]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 15:41:00 np0005595918.novalocal sudo[7939]: pam_unix(sudo:session): session closed for user root
Jan 26 15:41:00 np0005595918.novalocal sudo[7967]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-riupcvnagrqufxaplhuetxkuhtvmaium ; /usr/bin/python3'
Jan 26 15:41:00 np0005595918.novalocal sudo[7967]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:41:00 np0005595918.novalocal python3[7969]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 15:41:00 np0005595918.novalocal sudo[7967]: pam_unix(sudo:session): session closed for user root
Jan 26 15:41:00 np0005595918.novalocal sudo[7995]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzkuwnpkbotzrjlxcfhjmfyfxgzsyfet ; /usr/bin/python3'
Jan 26 15:41:00 np0005595918.novalocal sudo[7995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:41:00 np0005595918.novalocal python3[7997]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 15:41:00 np0005595918.novalocal sudo[7995]: pam_unix(sudo:session): session closed for user root
Jan 26 15:41:00 np0005595918.novalocal sudo[8023]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kekjbwoprbhkshwmqgzcausnyijudhtk ; /usr/bin/python3'
Jan 26 15:41:00 np0005595918.novalocal sudo[8023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:41:00 np0005595918.novalocal python3[8025]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 15:41:01 np0005595918.novalocal sudo[8023]: pam_unix(sudo:session): session closed for user root
Jan 26 15:41:01 np0005595918.novalocal python3[8052]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;
                                                       _uses_shell=True zuul_log_id=fa163ec2-ffbe-320c-360c-000000002190-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 15:41:02 np0005595918.novalocal python3[8082]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 26 15:41:04 np0005595918.novalocal sshd-session[7497]: Connection closed by 38.102.83.114 port 35272
Jan 26 15:41:04 np0005595918.novalocal sshd-session[7494]: pam_unix(sshd:session): session closed for user zuul
Jan 26 15:41:04 np0005595918.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Jan 26 15:41:04 np0005595918.novalocal systemd[1]: session-4.scope: Consumed 4.188s CPU time.
Jan 26 15:41:04 np0005595918.novalocal systemd-logind[788]: Session 4 logged out. Waiting for processes to exit.
Jan 26 15:41:04 np0005595918.novalocal systemd-logind[788]: Removed session 4.
Jan 26 15:41:05 np0005595918.novalocal sshd-session[8086]: Accepted publickey for zuul from 38.102.83.114 port 37496 ssh2: RSA SHA256:CwDInbOSxpxqp3mWwtfmY0v0Zi73QXMq6svTI6Qp+40
Jan 26 15:41:05 np0005595918.novalocal systemd-logind[788]: New session 5 of user zuul.
Jan 26 15:41:05 np0005595918.novalocal systemd[1]: Started Session 5 of User zuul.
Jan 26 15:41:05 np0005595918.novalocal sshd-session[8086]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 26 15:41:05 np0005595918.novalocal sudo[8113]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygtcerrwsfrdovqqpjblnelnoumjvwgp ; /usr/bin/python3'
Jan 26 15:41:05 np0005595918.novalocal sudo[8113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:41:06 np0005595918.novalocal python3[8115]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 26 15:41:15 np0005595918.novalocal setsebool[8153]: The virt_use_nfs policy boolean was changed to 1 by root
Jan 26 15:41:15 np0005595918.novalocal setsebool[8153]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Jan 26 15:41:31 np0005595918.novalocal kernel: SELinux:  Converting 385 SID table entries...
Jan 26 15:41:31 np0005595918.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Jan 26 15:41:31 np0005595918.novalocal kernel: SELinux:  policy capability open_perms=1
Jan 26 15:41:31 np0005595918.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Jan 26 15:41:31 np0005595918.novalocal kernel: SELinux:  policy capability always_check_network=0
Jan 26 15:41:31 np0005595918.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 26 15:41:31 np0005595918.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 26 15:41:31 np0005595918.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 26 15:41:42 np0005595918.novalocal sshd-session[8180]: Connection closed by authenticating user root 178.128.250.55 port 43518 [preauth]
Jan 26 15:41:46 np0005595918.novalocal kernel: SELinux:  Converting 388 SID table entries...
Jan 26 15:41:46 np0005595918.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Jan 26 15:41:46 np0005595918.novalocal kernel: SELinux:  policy capability open_perms=1
Jan 26 15:41:46 np0005595918.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Jan 26 15:41:46 np0005595918.novalocal kernel: SELinux:  policy capability always_check_network=0
Jan 26 15:41:46 np0005595918.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 26 15:41:46 np0005595918.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 26 15:41:46 np0005595918.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 26 15:42:05 np0005595918.novalocal dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 26 15:42:05 np0005595918.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 26 15:42:05 np0005595918.novalocal systemd[1]: Starting man-db-cache-update.service...
Jan 26 15:42:05 np0005595918.novalocal systemd[1]: Reloading.
Jan 26 15:42:05 np0005595918.novalocal systemd-rc-local-generator[8924]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 15:42:05 np0005595918.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Jan 26 15:42:07 np0005595918.novalocal sudo[8113]: pam_unix(sudo:session): session closed for user root
Jan 26 15:42:07 np0005595918.novalocal python3[10280]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"
                                                        _uses_shell=True zuul_log_id=fa163ec2-ffbe-a247-5214-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 15:42:08 np0005595918.novalocal kernel: evm: overlay not supported
Jan 26 15:42:08 np0005595918.novalocal systemd[4306]: Starting D-Bus User Message Bus...
Jan 26 15:42:08 np0005595918.novalocal dbus-broker-launch[11397]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Jan 26 15:42:08 np0005595918.novalocal dbus-broker-launch[11397]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Jan 26 15:42:08 np0005595918.novalocal systemd[4306]: Started D-Bus User Message Bus.
Jan 26 15:42:08 np0005595918.novalocal dbus-broker-lau[11397]: Ready
Jan 26 15:42:08 np0005595918.novalocal systemd[4306]: selinux: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 26 15:42:08 np0005595918.novalocal systemd[4306]: Created slice Slice /user.
Jan 26 15:42:08 np0005595918.novalocal systemd[4306]: podman-11240.scope: unit configures an IP firewall, but not running as root.
Jan 26 15:42:08 np0005595918.novalocal systemd[4306]: (This warning is only shown for the first unit using IP firewalling.)
Jan 26 15:42:08 np0005595918.novalocal systemd[4306]: Started podman-11240.scope.
Jan 26 15:42:08 np0005595918.novalocal systemd[4306]: Started podman-pause-2150a6c6.scope.
Jan 26 15:42:09 np0005595918.novalocal sudo[12023]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqukumzacitegziibqdkeotnfkionwyr ; /usr/bin/python3'
Jan 26 15:42:09 np0005595918.novalocal sudo[12023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:42:09 np0005595918.novalocal python3[12041]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]
                                                       location = "38.102.83.47:5001"
                                                       insecure = true path=/etc/containers/registries.conf block=[[registry]]
                                                       location = "38.102.83.47:5001"
                                                       insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:42:09 np0005595918.novalocal python3[12041]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Jan 26 15:42:09 np0005595918.novalocal sudo[12023]: pam_unix(sudo:session): session closed for user root
Jan 26 15:42:10 np0005595918.novalocal sshd-session[8089]: Connection closed by 38.102.83.114 port 37496
Jan 26 15:42:10 np0005595918.novalocal sshd-session[8086]: pam_unix(sshd:session): session closed for user zuul
Jan 26 15:42:10 np0005595918.novalocal systemd[1]: session-5.scope: Deactivated successfully.
Jan 26 15:42:10 np0005595918.novalocal systemd[1]: session-5.scope: Consumed 56.553s CPU time.
Jan 26 15:42:10 np0005595918.novalocal systemd-logind[788]: Session 5 logged out. Waiting for processes to exit.
Jan 26 15:42:10 np0005595918.novalocal systemd-logind[788]: Removed session 5.
Jan 26 15:42:27 np0005595918.novalocal sshd-session[18935]: Connection closed by authenticating user root 178.128.250.55 port 39310 [preauth]
Jan 26 15:42:31 np0005595918.novalocal sshd-session[20728]: Unable to negotiate with 38.102.83.145 port 48724: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Jan 26 15:42:31 np0005595918.novalocal sshd-session[20731]: Unable to negotiate with 38.102.83.145 port 48738: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Jan 26 15:42:31 np0005595918.novalocal sshd-session[20733]: Connection closed by 38.102.83.145 port 48700 [preauth]
Jan 26 15:42:31 np0005595918.novalocal sshd-session[20734]: Unable to negotiate with 38.102.83.145 port 48720: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Jan 26 15:42:31 np0005595918.novalocal sshd-session[20736]: Connection closed by 38.102.83.145 port 48706 [preauth]
Jan 26 15:42:36 np0005595918.novalocal sshd-session[22608]: Accepted publickey for zuul from 38.102.83.114 port 51614 ssh2: RSA SHA256:CwDInbOSxpxqp3mWwtfmY0v0Zi73QXMq6svTI6Qp+40
Jan 26 15:42:36 np0005595918.novalocal systemd-logind[788]: New session 6 of user zuul.
Jan 26 15:42:37 np0005595918.novalocal systemd[1]: Started Session 6 of User zuul.
Jan 26 15:42:37 np0005595918.novalocal sshd-session[22608]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 26 15:42:38 np0005595918.novalocal python3[22679]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCRezSS5F+hHDPcB7kLJQTMuWIPz7DsbXeBuE4QBS5w5rB/sur3aLtBTILJUOi09xc3xJS0lT29G8TXYSCXnIIE= zuul@np0005595917.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 15:42:38 np0005595918.novalocal sudo[22878]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfvzhededmnaxteaoitnioihmdrzpsnx ; /usr/bin/python3'
Jan 26 15:42:38 np0005595918.novalocal sudo[22878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:42:38 np0005595918.novalocal python3[22891]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCRezSS5F+hHDPcB7kLJQTMuWIPz7DsbXeBuE4QBS5w5rB/sur3aLtBTILJUOi09xc3xJS0lT29G8TXYSCXnIIE= zuul@np0005595917.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 15:42:38 np0005595918.novalocal sudo[22878]: pam_unix(sudo:session): session closed for user root
Jan 26 15:42:39 np0005595918.novalocal sudo[23288]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdsmlxptsoqpwhaytmmhgxxeyabdgjqw ; /usr/bin/python3'
Jan 26 15:42:39 np0005595918.novalocal sudo[23288]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:42:39 np0005595918.novalocal python3[23298]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005595918.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Jan 26 15:42:39 np0005595918.novalocal useradd[23367]: new group: name=cloud-admin, GID=1002
Jan 26 15:42:39 np0005595918.novalocal useradd[23367]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
Jan 26 15:42:39 np0005595918.novalocal sudo[23288]: pam_unix(sudo:session): session closed for user root
Jan 26 15:42:39 np0005595918.novalocal sudo[23485]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsqvynslobmyvjmhofqwvvmdwxhpvnmk ; /usr/bin/python3'
Jan 26 15:42:39 np0005595918.novalocal sudo[23485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:42:40 np0005595918.novalocal python3[23494]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCRezSS5F+hHDPcB7kLJQTMuWIPz7DsbXeBuE4QBS5w5rB/sur3aLtBTILJUOi09xc3xJS0lT29G8TXYSCXnIIE= zuul@np0005595917.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 15:42:40 np0005595918.novalocal sudo[23485]: pam_unix(sudo:session): session closed for user root
Jan 26 15:42:40 np0005595918.novalocal sudo[23763]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgcllshiiiybhawoibnqusoepozknzde ; /usr/bin/python3'
Jan 26 15:42:40 np0005595918.novalocal sudo[23763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:42:40 np0005595918.novalocal python3[23772]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 26 15:42:40 np0005595918.novalocal sudo[23763]: pam_unix(sudo:session): session closed for user root
Jan 26 15:42:40 np0005595918.novalocal sudo[24019]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhcwxnpweyftzrddruezapdtnpfvlpyz ; /usr/bin/python3'
Jan 26 15:42:40 np0005595918.novalocal sudo[24019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:42:41 np0005595918.novalocal python3[24030]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769442160.2981806-135-203878459934453/source _original_basename=tmp5j_9xiuw follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:42:41 np0005595918.novalocal sudo[24019]: pam_unix(sudo:session): session closed for user root
Jan 26 15:42:41 np0005595918.novalocal sudo[24320]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zyvgilpnmhbtbiqudgxirvfgxkpptcqt ; /usr/bin/python3'
Jan 26 15:42:41 np0005595918.novalocal sudo[24320]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:42:42 np0005595918.novalocal python3[24331]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Jan 26 15:42:42 np0005595918.novalocal systemd[1]: Starting Hostname Service...
Jan 26 15:42:42 np0005595918.novalocal systemd[1]: Started Hostname Service.
Jan 26 15:42:42 np0005595918.novalocal systemd-hostnamed[24453]: Changed pretty hostname to 'compute-0'
Jan 26 15:42:42 compute-0 systemd-hostnamed[24453]: Hostname set to <compute-0> (static)
Jan 26 15:42:42 compute-0 NetworkManager[7193]: <info>  [1769442162.2084] hostname: static hostname changed from "np0005595918.novalocal" to "compute-0"
Jan 26 15:42:42 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 26 15:42:42 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 26 15:42:42 compute-0 sudo[24320]: pam_unix(sudo:session): session closed for user root
Jan 26 15:42:42 compute-0 sshd-session[22620]: Connection closed by 38.102.83.114 port 51614
Jan 26 15:42:42 compute-0 sshd-session[22608]: pam_unix(sshd:session): session closed for user zuul
Jan 26 15:42:42 compute-0 systemd[1]: session-6.scope: Deactivated successfully.
Jan 26 15:42:42 compute-0 systemd[1]: session-6.scope: Consumed 2.625s CPU time.
Jan 26 15:42:42 compute-0 systemd-logind[788]: Session 6 logged out. Waiting for processes to exit.
Jan 26 15:42:42 compute-0 systemd-logind[788]: Removed session 6.
Jan 26 15:42:52 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 26 15:42:57 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 26 15:42:57 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 26 15:42:57 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1min 890ms CPU time.
Jan 26 15:42:57 compute-0 systemd[1]: run-r5bd6818fcf474a01acf5f286e21bcf59.service: Deactivated successfully.
Jan 26 15:43:10 compute-0 sshd-session[29931]: Connection closed by authenticating user root 178.128.250.55 port 57850 [preauth]
Jan 26 15:43:12 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 26 15:43:56 compute-0 sshd-session[29935]: Connection closed by authenticating user root 178.128.250.55 port 56790 [preauth]
Jan 26 15:44:40 compute-0 sshd-session[29937]: Connection closed by authenticating user root 178.128.250.55 port 36788 [preauth]
Jan 26 15:44:40 compute-0 systemd[1]: Starting Cleanup of Temporary Directories...
Jan 26 15:44:40 compute-0 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Jan 26 15:44:40 compute-0 systemd[1]: Finished Cleanup of Temporary Directories.
Jan 26 15:44:40 compute-0 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Jan 26 15:44:43 compute-0 sshd-session[29944]: Received disconnect from 103.42.57.158 port 56654:11:  [preauth]
Jan 26 15:44:43 compute-0 sshd-session[29944]: Disconnected from authenticating user root 103.42.57.158 port 56654 [preauth]
Jan 26 15:45:24 compute-0 sshd-session[29948]: Connection closed by authenticating user root 178.128.250.55 port 39950 [preauth]
Jan 26 15:45:36 compute-0 sshd-session[29950]: banner exchange: Connection from 3.137.73.221 port 43266: invalid format
Jan 26 15:45:48 compute-0 sshd-session[29952]: Connection closed by 3.137.73.221 port 44668
Jan 26 15:46:17 compute-0 sshd-session[29953]: Connection closed by 178.128.250.55 port 57190
Jan 26 15:46:18 compute-0 sshd-session[29954]: Connection closed by authenticating user root 178.128.250.55 port 57206 [preauth]
Jan 26 15:48:01 compute-0 sshd-session[29957]: Connection closed by 3.137.73.221 port 47088
Jan 26 15:48:30 compute-0 sshd-session[29958]: Accepted publickey for zuul from 38.102.83.145 port 40262 ssh2: RSA SHA256:CwDInbOSxpxqp3mWwtfmY0v0Zi73QXMq6svTI6Qp+40
Jan 26 15:48:30 compute-0 systemd-logind[788]: New session 7 of user zuul.
Jan 26 15:48:30 compute-0 systemd[1]: Started Session 7 of User zuul.
Jan 26 15:48:30 compute-0 sshd-session[29958]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 26 15:48:31 compute-0 python3[30034]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 15:48:32 compute-0 sudo[30148]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrgtntrzbblekmuumrbhrwbqjmblhivu ; /usr/bin/python3'
Jan 26 15:48:32 compute-0 sudo[30148]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:48:32 compute-0 python3[30150]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 26 15:48:32 compute-0 sudo[30148]: pam_unix(sudo:session): session closed for user root
Jan 26 15:48:33 compute-0 sudo[30221]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxnuljlwkpxzvlplknneobmytuictsus ; /usr/bin/python3'
Jan 26 15:48:33 compute-0 sudo[30221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:48:33 compute-0 python3[30223]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769442512.4142387-33661-1657648149927/source mode=0755 _original_basename=delorean.repo follow=False checksum=0f7c85cc67bf467c48edf98d5acc63e62d808324 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:48:33 compute-0 sudo[30221]: pam_unix(sudo:session): session closed for user root
Jan 26 15:48:33 compute-0 sudo[30247]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idulgtfwuiruzejcvqlwrywebutsrxwr ; /usr/bin/python3'
Jan 26 15:48:33 compute-0 sudo[30247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:48:33 compute-0 python3[30249]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 26 15:48:33 compute-0 sudo[30247]: pam_unix(sudo:session): session closed for user root
Jan 26 15:48:33 compute-0 sudo[30320]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpcchtksooagpumploijvfchrqimkkbk ; /usr/bin/python3'
Jan 26 15:48:33 compute-0 sudo[30320]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:48:33 compute-0 python3[30322]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769442512.4142387-33661-1657648149927/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:48:33 compute-0 sudo[30320]: pam_unix(sudo:session): session closed for user root
Jan 26 15:48:33 compute-0 sudo[30346]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tuxusyuvfriwuyrncjsdvcimoeliwzug ; /usr/bin/python3'
Jan 26 15:48:33 compute-0 sudo[30346]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:48:34 compute-0 python3[30348]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 26 15:48:34 compute-0 sudo[30346]: pam_unix(sudo:session): session closed for user root
Jan 26 15:48:34 compute-0 sudo[30419]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xidjqdnsgamgxwlpnaunfvkpetunylir ; /usr/bin/python3'
Jan 26 15:48:34 compute-0 sudo[30419]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:48:34 compute-0 python3[30421]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769442512.4142387-33661-1657648149927/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:48:34 compute-0 sudo[30419]: pam_unix(sudo:session): session closed for user root
Jan 26 15:48:34 compute-0 sudo[30445]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phvsxwglevddnggebbihbmgbrepsjzwf ; /usr/bin/python3'
Jan 26 15:48:34 compute-0 sudo[30445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:48:34 compute-0 python3[30447]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 26 15:48:34 compute-0 sudo[30445]: pam_unix(sudo:session): session closed for user root
Jan 26 15:48:35 compute-0 sudo[30518]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvgqqtookiyneoyafmdadwszvputmtdu ; /usr/bin/python3'
Jan 26 15:48:35 compute-0 sudo[30518]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:48:35 compute-0 python3[30520]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769442512.4142387-33661-1657648149927/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:48:35 compute-0 sudo[30518]: pam_unix(sudo:session): session closed for user root
Jan 26 15:48:35 compute-0 sudo[30544]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tabpmtlbpdmbjujepvrzjwjcxczrqncz ; /usr/bin/python3'
Jan 26 15:48:35 compute-0 sudo[30544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:48:35 compute-0 python3[30546]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 26 15:48:35 compute-0 sudo[30544]: pam_unix(sudo:session): session closed for user root
Jan 26 15:48:35 compute-0 sudo[30617]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjlemunzdflbzmxekvhhiillhwuxotbw ; /usr/bin/python3'
Jan 26 15:48:35 compute-0 sudo[30617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:48:35 compute-0 python3[30619]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769442512.4142387-33661-1657648149927/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:48:35 compute-0 sudo[30617]: pam_unix(sudo:session): session closed for user root
Jan 26 15:48:35 compute-0 sudo[30643]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yakhaunmddwvjovsehdgpdcwuqculpxy ; /usr/bin/python3'
Jan 26 15:48:35 compute-0 sudo[30643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:48:36 compute-0 python3[30645]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 26 15:48:36 compute-0 sudo[30643]: pam_unix(sudo:session): session closed for user root
Jan 26 15:48:36 compute-0 sudo[30716]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krzxbgsilfbfbwmxwqrnrkwahyshywkz ; /usr/bin/python3'
Jan 26 15:48:36 compute-0 sudo[30716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:48:36 compute-0 python3[30718]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769442512.4142387-33661-1657648149927/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:48:36 compute-0 sudo[30716]: pam_unix(sudo:session): session closed for user root
Jan 26 15:48:36 compute-0 sudo[30742]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzdjgszgeoezqcxwziadfgnbawohbrqw ; /usr/bin/python3'
Jan 26 15:48:36 compute-0 sudo[30742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:48:36 compute-0 python3[30744]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 26 15:48:36 compute-0 sudo[30742]: pam_unix(sudo:session): session closed for user root
Jan 26 15:48:36 compute-0 sudo[30815]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzvovtinwukhwxiaqfapisuevitunqfj ; /usr/bin/python3'
Jan 26 15:48:36 compute-0 sudo[30815]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:48:37 compute-0 python3[30817]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769442512.4142387-33661-1657648149927/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=2583a70b3ee76a9837350b0837bc004a8e52405c backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:48:37 compute-0 sudo[30815]: pam_unix(sudo:session): session closed for user root
Jan 26 15:48:39 compute-0 sshd-session[30842]: Connection closed by 192.168.122.11 port 55492 [preauth]
Jan 26 15:48:39 compute-0 sshd-session[30843]: Connection closed by 192.168.122.11 port 55494 [preauth]
Jan 26 15:48:39 compute-0 sshd-session[30844]: Unable to negotiate with 192.168.122.11 port 55500: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Jan 26 15:48:39 compute-0 sshd-session[30845]: Unable to negotiate with 192.168.122.11 port 55504: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Jan 26 15:48:39 compute-0 sshd-session[30846]: Unable to negotiate with 192.168.122.11 port 55510: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Jan 26 15:49:19 compute-0 sshd-session[30852]: banner exchange: Connection from 3.137.73.221 port 59084: invalid format
Jan 26 15:51:25 compute-0 sshd-session[30855]: Connection closed by 3.137.73.221 port 36838 [preauth]
Jan 26 15:51:53 compute-0 python3[30880]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 15:51:53 compute-0 systemd[1]: Starting dnf makecache...
Jan 26 15:51:53 compute-0 dnf[30882]: Failed determining last makecache time.
Jan 26 15:51:53 compute-0 dnf[30882]: delorean-openstack-barbican-42b4c41831408a8e323 213 kB/s |  13 kB     00:00
Jan 26 15:51:53 compute-0 dnf[30882]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 2.5 MB/s |  65 kB     00:00
Jan 26 15:51:54 compute-0 dnf[30882]: delorean-openstack-cinder-1c00d6490d88e436f26ef 1.2 MB/s |  32 kB     00:00
Jan 26 15:51:54 compute-0 dnf[30882]: delorean-python-stevedore-c4acc5639fd2329372142 2.3 MB/s | 131 kB     00:00
Jan 26 15:51:54 compute-0 dnf[30882]: delorean-python-cloudkitty-tests-tempest-2c80f8 1.3 MB/s |  32 kB     00:00
Jan 26 15:51:54 compute-0 dnf[30882]: delorean-os-refresh-config-9bfc52b5049be2d8de61 1.2 MB/s | 349 kB     00:00
Jan 26 15:51:54 compute-0 dnf[30882]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 220 kB/s |  42 kB     00:00
Jan 26 15:51:54 compute-0 dnf[30882]: delorean-python-designate-tests-tempest-347fdbc 222 kB/s |  18 kB     00:00
Jan 26 15:51:54 compute-0 dnf[30882]: delorean-openstack-glance-1fd12c29b339f30fe823e 122 kB/s |  18 kB     00:00
Jan 26 15:51:55 compute-0 dnf[30882]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 1.1 MB/s |  29 kB     00:00
Jan 26 15:51:55 compute-0 dnf[30882]: delorean-openstack-manila-3c01b7181572c95dac462 955 kB/s |  25 kB     00:00
Jan 26 15:51:55 compute-0 dnf[30882]: delorean-python-whitebox-neutron-tests-tempest- 5.6 MB/s | 154 kB     00:00
Jan 26 15:51:55 compute-0 dnf[30882]: delorean-openstack-octavia-ba397f07a7331190208c 930 kB/s |  26 kB     00:00
Jan 26 15:51:55 compute-0 dnf[30882]: delorean-openstack-watcher-c014f81a8647287f6dcc 651 kB/s |  16 kB     00:00
Jan 26 15:51:55 compute-0 dnf[30882]: delorean-ansible-config_template-5ccaa22121a7ff 276 kB/s | 7.4 kB     00:00
Jan 26 15:51:55 compute-0 dnf[30882]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 4.9 MB/s | 144 kB     00:00
Jan 26 15:51:55 compute-0 dnf[30882]: delorean-openstack-swift-dc98a8463506ac520c469a 575 kB/s |  14 kB     00:00
Jan 26 15:51:55 compute-0 dnf[30882]: delorean-python-tempestconf-8515371b7cceebd4282 112 kB/s |  53 kB     00:00
Jan 26 15:51:55 compute-0 dnf[30882]: delorean-openstack-heat-ui-013accbfd179753bc3f0 1.1 MB/s |  96 kB     00:00
Jan 26 15:51:56 compute-0 dnf[30882]: CentOS Stream 9 - BaseOS                         58 kB/s | 6.7 kB     00:00
Jan 26 15:51:56 compute-0 dnf[30882]: CentOS Stream 9 - AppStream                      59 kB/s | 6.8 kB     00:00
Jan 26 15:51:56 compute-0 dnf[30882]: CentOS Stream 9 - CRB                            58 kB/s | 6.6 kB     00:00
Jan 26 15:51:56 compute-0 dnf[30882]: CentOS Stream 9 - Extras packages                32 kB/s | 7.3 kB     00:00
Jan 26 15:51:56 compute-0 dnf[30882]: dlrn-antelope-testing                            24 MB/s | 1.1 MB     00:00
Jan 26 15:51:57 compute-0 dnf[30882]: dlrn-antelope-build-deps                        1.6 MB/s | 461 kB     00:00
Jan 26 15:51:57 compute-0 dnf[30882]: centos9-rabbitmq                                703 kB/s | 123 kB     00:00
Jan 26 15:51:57 compute-0 dnf[30882]: centos9-storage                                  22 MB/s | 415 kB     00:00
Jan 26 15:51:58 compute-0 dnf[30882]: centos9-opstools                                479 kB/s |  51 kB     00:00
Jan 26 15:51:58 compute-0 dnf[30882]: NFV SIG OpenvSwitch                             7.2 MB/s | 461 kB     00:00
Jan 26 15:51:59 compute-0 dnf[30882]: repo-setup-centos-appstream                      42 MB/s |  26 MB     00:00
Jan 26 15:52:08 compute-0 dnf[30882]: repo-setup-centos-baseos                         69 MB/s | 8.9 MB     00:00
Jan 26 15:52:10 compute-0 dnf[30882]: repo-setup-centos-highavailability               34 MB/s | 744 kB     00:00
Jan 26 15:52:10 compute-0 dnf[30882]: repo-setup-centos-powertools                     65 MB/s | 7.6 MB     00:00
Jan 26 15:52:13 compute-0 dnf[30882]: Extra Packages for Enterprise Linux 9 - x86_64   37 MB/s |  20 MB     00:00
Jan 26 15:52:19 compute-0 sshd-session[30983]: banner exchange: Connection from 3.137.73.221 port 53078: invalid format
Jan 26 15:52:32 compute-0 dnf[30882]: Metadata cache created.
Jan 26 15:52:32 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Jan 26 15:52:32 compute-0 systemd[1]: Finished dnf makecache.
Jan 26 15:52:32 compute-0 systemd[1]: dnf-makecache.service: Consumed 35.309s CPU time.
Jan 26 15:56:53 compute-0 sshd-session[29961]: Received disconnect from 38.102.83.145 port 40262:11: disconnected by user
Jan 26 15:56:53 compute-0 sshd-session[29961]: Disconnected from user zuul 38.102.83.145 port 40262
Jan 26 15:56:53 compute-0 sshd-session[29958]: pam_unix(sshd:session): session closed for user zuul
Jan 26 15:56:53 compute-0 systemd[1]: session-7.scope: Deactivated successfully.
Jan 26 15:56:53 compute-0 systemd[1]: session-7.scope: Consumed 5.225s CPU time.
Jan 26 15:56:53 compute-0 systemd-logind[788]: Session 7 logged out. Waiting for processes to exit.
Jan 26 15:56:53 compute-0 systemd-logind[788]: Removed session 7.
Jan 26 15:59:26 compute-0 sshd-session[30987]: Connection closed by 45.249.247.124 port 34210
Jan 26 15:59:29 compute-0 sshd-session[30988]: Connection reset by authenticating user root 176.120.22.13 port 31906 [preauth]
Jan 26 15:59:31 compute-0 sshd-session[30990]: Invalid user admin from 176.120.22.13 port 31918
Jan 26 15:59:32 compute-0 sshd-session[30990]: Connection reset by invalid user admin 176.120.22.13 port 31918 [preauth]
Jan 26 15:59:35 compute-0 sshd-session[30992]: Connection reset by authenticating user root 176.120.22.13 port 35912 [preauth]
Jan 26 15:59:38 compute-0 sshd-session[30994]: Connection reset by authenticating user root 176.120.22.13 port 35930 [preauth]
Jan 26 15:59:39 compute-0 sshd-session[30996]: Invalid user  from 176.120.22.13 port 35938
Jan 26 15:59:40 compute-0 sshd-session[30996]: Connection reset by invalid user  176.120.22.13 port 35938 [preauth]
Jan 26 16:01:01 compute-0 CROND[31000]: (root) CMD (run-parts /etc/cron.hourly)
Jan 26 16:01:01 compute-0 run-parts[31003]: (/etc/cron.hourly) starting 0anacron
Jan 26 16:01:01 compute-0 anacron[31011]: Anacron started on 2026-01-26
Jan 26 16:01:01 compute-0 anacron[31011]: Will run job `cron.daily' in 23 min.
Jan 26 16:01:01 compute-0 anacron[31011]: Will run job `cron.weekly' in 43 min.
Jan 26 16:01:01 compute-0 anacron[31011]: Will run job `cron.monthly' in 63 min.
Jan 26 16:01:01 compute-0 anacron[31011]: Jobs will be executed sequentially
Jan 26 16:01:01 compute-0 run-parts[31013]: (/etc/cron.hourly) finished 0anacron
Jan 26 16:01:01 compute-0 CROND[30999]: (root) CMDEND (run-parts /etc/cron.hourly)
Jan 26 16:05:06 compute-0 sshd-session[31016]: Invalid user AdminGPON from 45.148.10.121 port 38280
Jan 26 16:05:06 compute-0 sshd-session[31016]: Connection closed by invalid user AdminGPON 45.148.10.121 port 38280 [preauth]
Jan 26 16:07:02 compute-0 sshd-session[31019]: Accepted publickey for zuul from 192.168.122.30 port 54996 ssh2: ECDSA SHA256:e2nTWydmUlQnW5BYhfFSR+TRarHnUqn0luI8Mjiyqhk
Jan 26 16:07:02 compute-0 systemd-logind[788]: New session 8 of user zuul.
Jan 26 16:07:02 compute-0 systemd[1]: Started Session 8 of User zuul.
Jan 26 16:07:02 compute-0 sshd-session[31019]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 26 16:07:03 compute-0 python3.9[31172]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 16:07:04 compute-0 sudo[31351]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cndpsisiepjpodhrwppowshfuywrqblb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769443624.1580508-27-231561060327701/AnsiballZ_command.py'
Jan 26 16:07:04 compute-0 sudo[31351]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:07:04 compute-0 python3.9[31353]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 16:07:16 compute-0 sudo[31351]: pam_unix(sudo:session): session closed for user root
Jan 26 16:07:16 compute-0 sshd-session[31022]: Connection closed by 192.168.122.30 port 54996
Jan 26 16:07:16 compute-0 sshd-session[31019]: pam_unix(sshd:session): session closed for user zuul
Jan 26 16:07:16 compute-0 systemd-logind[788]: Session 8 logged out. Waiting for processes to exit.
Jan 26 16:07:16 compute-0 systemd[1]: session-8.scope: Deactivated successfully.
Jan 26 16:07:16 compute-0 systemd[1]: session-8.scope: Consumed 8.880s CPU time.
Jan 26 16:07:16 compute-0 systemd-logind[788]: Removed session 8.
Jan 26 16:07:23 compute-0 sshd-session[31411]: Accepted publickey for zuul from 192.168.122.30 port 50482 ssh2: ECDSA SHA256:e2nTWydmUlQnW5BYhfFSR+TRarHnUqn0luI8Mjiyqhk
Jan 26 16:07:23 compute-0 systemd-logind[788]: New session 9 of user zuul.
Jan 26 16:07:23 compute-0 systemd[1]: Started Session 9 of User zuul.
Jan 26 16:07:23 compute-0 sshd-session[31411]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 26 16:07:24 compute-0 python3.9[31564]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 16:07:25 compute-0 sshd-session[31414]: Connection closed by 192.168.122.30 port 50482
Jan 26 16:07:25 compute-0 sshd-session[31411]: pam_unix(sshd:session): session closed for user zuul
Jan 26 16:07:25 compute-0 systemd-logind[788]: Session 9 logged out. Waiting for processes to exit.
Jan 26 16:07:25 compute-0 systemd[1]: session-9.scope: Deactivated successfully.
Jan 26 16:07:25 compute-0 systemd-logind[788]: Removed session 9.
Jan 26 16:07:42 compute-0 sshd-session[31592]: Accepted publickey for zuul from 192.168.122.30 port 32894 ssh2: ECDSA SHA256:e2nTWydmUlQnW5BYhfFSR+TRarHnUqn0luI8Mjiyqhk
Jan 26 16:07:42 compute-0 systemd-logind[788]: New session 10 of user zuul.
Jan 26 16:07:42 compute-0 systemd[1]: Started Session 10 of User zuul.
Jan 26 16:07:42 compute-0 sshd-session[31592]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 26 16:07:43 compute-0 python3.9[31745]: ansible-ansible.legacy.ping Invoked with data=pong
Jan 26 16:07:44 compute-0 python3.9[31919]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 16:07:45 compute-0 sudo[32069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zoqqhyigyccqstncrtjvjgkjxaobbahk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769443664.8142183-40-41118731525408/AnsiballZ_command.py'
Jan 26 16:07:45 compute-0 sudo[32069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:07:45 compute-0 python3.9[32071]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 16:07:45 compute-0 sudo[32069]: pam_unix(sudo:session): session closed for user root
Jan 26 16:07:46 compute-0 sudo[32222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kyaajhcpgpgnydpllpyanjpzqbvuqxwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769443665.730672-52-40837123545634/AnsiballZ_stat.py'
Jan 26 16:07:46 compute-0 sudo[32222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:07:46 compute-0 python3.9[32224]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 16:07:46 compute-0 sudo[32222]: pam_unix(sudo:session): session closed for user root
Jan 26 16:07:46 compute-0 sudo[32374]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svpsyqxkaeuwsiknatvkekkdgyjxjkat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769443666.5042388-60-139956138854635/AnsiballZ_file.py'
Jan 26 16:07:46 compute-0 sudo[32374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:07:47 compute-0 python3.9[32376]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:07:47 compute-0 sudo[32374]: pam_unix(sudo:session): session closed for user root
Jan 26 16:07:47 compute-0 sudo[32526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmahwlmxeybiuxkkhdzwmyztryjkbutv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769443667.359522-68-235075182089710/AnsiballZ_stat.py'
Jan 26 16:07:47 compute-0 sudo[32526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:07:47 compute-0 python3.9[32528]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:07:47 compute-0 sudo[32526]: pam_unix(sudo:session): session closed for user root
Jan 26 16:07:48 compute-0 sudo[32649]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjnmgusrcigtrspafarsuqymparibdrs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769443667.359522-68-235075182089710/AnsiballZ_copy.py'
Jan 26 16:07:48 compute-0 sudo[32649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:07:48 compute-0 python3.9[32651]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1769443667.359522-68-235075182089710/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:07:48 compute-0 sudo[32649]: pam_unix(sudo:session): session closed for user root
Jan 26 16:07:48 compute-0 sudo[32801]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hggojydybnfkppjwcictdnfcducwwksm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769443668.706507-83-224287364523245/AnsiballZ_setup.py'
Jan 26 16:07:48 compute-0 sudo[32801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:07:49 compute-0 python3.9[32803]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 16:07:49 compute-0 sudo[32801]: pam_unix(sudo:session): session closed for user root
Jan 26 16:07:49 compute-0 sudo[32957]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwuumvahmmyomkrutsqqfhxvqbfophrm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769443669.6791046-91-63270349480167/AnsiballZ_file.py'
Jan 26 16:07:49 compute-0 sudo[32957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:07:50 compute-0 python3.9[32959]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:07:50 compute-0 sudo[32957]: pam_unix(sudo:session): session closed for user root
Jan 26 16:07:50 compute-0 sudo[33109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rloxsdcgsmzfcqiplivfhyiuvxguiwff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769443670.374474-100-134787070997749/AnsiballZ_file.py'
Jan 26 16:07:50 compute-0 sudo[33109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:07:50 compute-0 python3.9[33111]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:07:50 compute-0 sudo[33109]: pam_unix(sudo:session): session closed for user root
Jan 26 16:07:51 compute-0 python3.9[33261]: ansible-ansible.builtin.service_facts Invoked
Jan 26 16:07:55 compute-0 python3.9[33514]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:07:55 compute-0 python3.9[33664]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 16:07:57 compute-0 python3.9[33818]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 16:07:58 compute-0 sudo[33974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnquymoankbxkxxvttdkkguweyusksuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769443677.5362475-148-17892005753130/AnsiballZ_setup.py'
Jan 26 16:07:58 compute-0 sudo[33974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:07:58 compute-0 python3.9[33976]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 26 16:07:58 compute-0 sudo[33974]: pam_unix(sudo:session): session closed for user root
Jan 26 16:07:59 compute-0 sudo[34058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkzvycerhcngaqxycyojosshxybgikcy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769443677.5362475-148-17892005753130/AnsiballZ_dnf.py'
Jan 26 16:07:59 compute-0 sudo[34058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:07:59 compute-0 python3.9[34060]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 26 16:08:01 compute-0 irqbalance[787]: Cannot change IRQ 26 affinity: Operation not permitted
Jan 26 16:08:01 compute-0 irqbalance[787]: IRQ 26 affinity is now unmanaged
Jan 26 16:08:48 compute-0 systemd[1]: Reloading.
Jan 26 16:08:48 compute-0 systemd-rc-local-generator[34261]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 16:08:48 compute-0 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Jan 26 16:08:48 compute-0 systemd[1]: Reloading.
Jan 26 16:08:49 compute-0 systemd-rc-local-generator[34302]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 16:08:49 compute-0 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Jan 26 16:08:49 compute-0 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Jan 26 16:08:49 compute-0 systemd[1]: Reloading.
Jan 26 16:08:49 compute-0 systemd-rc-local-generator[34338]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 16:08:49 compute-0 systemd[1]: Listening on LVM2 poll daemon socket.
Jan 26 16:08:49 compute-0 dbus-broker-launch[761]: Noticed file-system modification, trigger reload.
Jan 26 16:08:49 compute-0 dbus-broker-launch[761]: Noticed file-system modification, trigger reload.
Jan 26 16:08:49 compute-0 dbus-broker-launch[761]: Noticed file-system modification, trigger reload.
Jan 26 16:10:05 compute-0 kernel: SELinux:  Converting 2725 SID table entries...
Jan 26 16:10:05 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 26 16:10:05 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 26 16:10:05 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 26 16:10:05 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 26 16:10:05 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 26 16:10:05 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 26 16:10:05 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 26 16:10:05 compute-0 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Jan 26 16:10:06 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 26 16:10:06 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 26 16:10:06 compute-0 systemd[1]: Reloading.
Jan 26 16:10:06 compute-0 systemd-rc-local-generator[34666]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 16:10:06 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 26 16:10:06 compute-0 sudo[34058]: pam_unix(sudo:session): session closed for user root
Jan 26 16:10:07 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 26 16:10:07 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 26 16:10:07 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.392s CPU time.
Jan 26 16:10:07 compute-0 systemd[1]: run-r6e55cfb240154f3091cef960b4bb7526.service: Deactivated successfully.
Jan 26 16:10:07 compute-0 sudo[35580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xobvfjdanladqwxtgnvyllgczkrvfvzx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769443807.1152077-160-95053508367589/AnsiballZ_command.py'
Jan 26 16:10:07 compute-0 sudo[35580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:10:07 compute-0 python3.9[35582]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 16:10:08 compute-0 sudo[35580]: pam_unix(sudo:session): session closed for user root
Jan 26 16:10:09 compute-0 sudo[35861]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjnwombbmrvhvdsmiowwkrfkrfjegkew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769443808.6759899-168-207972979542500/AnsiballZ_selinux.py'
Jan 26 16:10:09 compute-0 sudo[35861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:10:09 compute-0 python3.9[35863]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Jan 26 16:10:09 compute-0 sudo[35861]: pam_unix(sudo:session): session closed for user root
Jan 26 16:10:10 compute-0 sudo[36013]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojthgjsulazgnqmyykjcemlqfnjgmqqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769443810.0308602-179-201073512200592/AnsiballZ_command.py'
Jan 26 16:10:10 compute-0 sudo[36013]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:10:10 compute-0 python3.9[36015]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Jan 26 16:10:12 compute-0 sudo[36013]: pam_unix(sudo:session): session closed for user root
Jan 26 16:10:12 compute-0 sudo[36166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snviyhktwwqpulrdrrwktuychhxewcuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769443812.35211-187-20357263731486/AnsiballZ_file.py'
Jan 26 16:10:12 compute-0 sudo[36166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:10:13 compute-0 python3.9[36168]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:10:13 compute-0 sudo[36166]: pam_unix(sudo:session): session closed for user root
Jan 26 16:10:14 compute-0 sudo[36318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kedltpnkwsuhmnjvxnsllwwvcggzmszm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769443813.7786536-195-263315116785831/AnsiballZ_mount.py'
Jan 26 16:10:14 compute-0 sudo[36318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:10:14 compute-0 python3.9[36320]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Jan 26 16:10:14 compute-0 sudo[36318]: pam_unix(sudo:session): session closed for user root
Jan 26 16:10:15 compute-0 sudo[36470]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mysyowteyrnplrfpnterctguyvievfwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769443815.4634755-223-279789521252421/AnsiballZ_file.py'
Jan 26 16:10:15 compute-0 sudo[36470]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:10:15 compute-0 python3.9[36472]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:10:15 compute-0 sudo[36470]: pam_unix(sudo:session): session closed for user root
Jan 26 16:10:16 compute-0 sudo[36622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ouzdwrewygakwytsbsipjwnhmyvyljlu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769443816.0997748-231-267881826797836/AnsiballZ_stat.py'
Jan 26 16:10:16 compute-0 sudo[36622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:10:16 compute-0 python3.9[36624]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:10:16 compute-0 sudo[36622]: pam_unix(sudo:session): session closed for user root
Jan 26 16:10:16 compute-0 sudo[36745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbzujlistwashdfkytmqtkgwwjogdhpd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769443816.0997748-231-267881826797836/AnsiballZ_copy.py'
Jan 26 16:10:16 compute-0 sudo[36745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:10:19 compute-0 python3.9[36747]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769443816.0997748-231-267881826797836/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=48027188bf350c9fc6c8da30ecdf77ef40b80f2e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:10:19 compute-0 sudo[36745]: pam_unix(sudo:session): session closed for user root
Jan 26 16:10:20 compute-0 sudo[36897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkxdargfmqhcucljtldinjwwfaxfonvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769443820.1134238-255-52452242292920/AnsiballZ_stat.py'
Jan 26 16:10:20 compute-0 sudo[36897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:10:23 compute-0 python3.9[36899]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 16:10:23 compute-0 sudo[36897]: pam_unix(sudo:session): session closed for user root
Jan 26 16:10:23 compute-0 sudo[37049]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwexjvoyobrrornhsbazalvlhliungpe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769443823.3176787-263-130796909556129/AnsiballZ_command.py'
Jan 26 16:10:23 compute-0 sudo[37049]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:10:23 compute-0 python3.9[37051]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 16:10:23 compute-0 sudo[37049]: pam_unix(sudo:session): session closed for user root
Jan 26 16:10:24 compute-0 sudo[37202]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngowztrsoocmvcbtntodlndxmtmcqdul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769443824.0008292-271-258713529978998/AnsiballZ_file.py'
Jan 26 16:10:24 compute-0 sudo[37202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:10:24 compute-0 python3.9[37204]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:10:24 compute-0 sudo[37202]: pam_unix(sudo:session): session closed for user root
Jan 26 16:10:25 compute-0 sudo[37354]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uythuxfowkuzqyulrjjztrglpntymoed ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769443825.0355213-282-233987674127850/AnsiballZ_getent.py'
Jan 26 16:10:25 compute-0 sudo[37354]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:10:25 compute-0 python3.9[37356]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Jan 26 16:10:25 compute-0 sudo[37354]: pam_unix(sudo:session): session closed for user root
Jan 26 16:10:25 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 26 16:10:26 compute-0 sudo[37508]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uahriyvvrolqnmfkhofsvsbtdzsxnmuz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769443825.9652882-290-183135513628764/AnsiballZ_group.py'
Jan 26 16:10:26 compute-0 sudo[37508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:10:26 compute-0 python3.9[37510]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 26 16:10:26 compute-0 groupadd[37511]: group added to /etc/group: name=qemu, GID=107
Jan 26 16:10:26 compute-0 groupadd[37511]: group added to /etc/gshadow: name=qemu
Jan 26 16:10:26 compute-0 groupadd[37511]: new group: name=qemu, GID=107
Jan 26 16:10:26 compute-0 sudo[37508]: pam_unix(sudo:session): session closed for user root
Jan 26 16:10:27 compute-0 sudo[37666]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngytjhyyqitdqihhdlkbuzanyulujazu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769443826.9932437-298-76126278998869/AnsiballZ_user.py'
Jan 26 16:10:27 compute-0 sudo[37666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:10:27 compute-0 python3.9[37668]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 26 16:10:27 compute-0 useradd[37670]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/0
Jan 26 16:10:27 compute-0 sudo[37666]: pam_unix(sudo:session): session closed for user root
Jan 26 16:10:28 compute-0 sudo[37826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwrrjqcsbznwlqwqycildobjvfjlihtu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769443828.0664456-306-280072881878902/AnsiballZ_getent.py'
Jan 26 16:10:28 compute-0 sudo[37826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:10:28 compute-0 python3.9[37828]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Jan 26 16:10:28 compute-0 sudo[37826]: pam_unix(sudo:session): session closed for user root
Jan 26 16:10:29 compute-0 sudo[37979]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inkfczbpwrbmlbtmmhuqgtkwqtnvcxre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769443828.78514-314-76987382498606/AnsiballZ_group.py'
Jan 26 16:10:29 compute-0 sudo[37979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:10:29 compute-0 python3.9[37981]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 26 16:10:29 compute-0 groupadd[37982]: group added to /etc/group: name=hugetlbfs, GID=42477
Jan 26 16:10:29 compute-0 groupadd[37982]: group added to /etc/gshadow: name=hugetlbfs
Jan 26 16:10:29 compute-0 groupadd[37982]: new group: name=hugetlbfs, GID=42477
Jan 26 16:10:29 compute-0 sudo[37979]: pam_unix(sudo:session): session closed for user root
Jan 26 16:10:29 compute-0 sudo[38137]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxvrbsbwiwqptlcedkrycjypextgtadh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769443829.555279-323-189857598574346/AnsiballZ_file.py'
Jan 26 16:10:29 compute-0 sudo[38137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:10:30 compute-0 python3.9[38139]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Jan 26 16:10:30 compute-0 sudo[38137]: pam_unix(sudo:session): session closed for user root
Jan 26 16:10:30 compute-0 sudo[38289]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uuhmmiesumltqezaaukzbcvcdqubtfdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769443830.3650467-334-135780001343262/AnsiballZ_dnf.py'
Jan 26 16:10:30 compute-0 sudo[38289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:10:30 compute-0 python3.9[38291]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 26 16:10:33 compute-0 sudo[38289]: pam_unix(sudo:session): session closed for user root
Jan 26 16:10:33 compute-0 sudo[38442]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exmbiifepwoaujjrxnqeuawpfvrhodfe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769443833.1791918-342-38832672862351/AnsiballZ_file.py'
Jan 26 16:10:33 compute-0 sudo[38442]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:10:33 compute-0 python3.9[38444]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:10:33 compute-0 sudo[38442]: pam_unix(sudo:session): session closed for user root
Jan 26 16:10:34 compute-0 sudo[38594]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxeeoxddsjloejkorrsxydtgtpqxtuia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769443833.8631513-350-117280076249677/AnsiballZ_stat.py'
Jan 26 16:10:34 compute-0 sudo[38594]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:10:34 compute-0 python3.9[38596]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:10:34 compute-0 sudo[38594]: pam_unix(sudo:session): session closed for user root
Jan 26 16:10:34 compute-0 sudo[38717]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qexkhifbwoephjhcguaatoqksozzrdws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769443833.8631513-350-117280076249677/AnsiballZ_copy.py'
Jan 26 16:10:34 compute-0 sudo[38717]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:10:35 compute-0 python3.9[38719]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769443833.8631513-350-117280076249677/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:10:35 compute-0 sudo[38717]: pam_unix(sudo:session): session closed for user root
Jan 26 16:10:35 compute-0 sudo[38869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnffvfflbkflxxpaiqoygzvbzztfmtac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769443835.21266-365-61444048756794/AnsiballZ_systemd.py'
Jan 26 16:10:35 compute-0 sudo[38869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:10:36 compute-0 python3.9[38871]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 26 16:10:36 compute-0 systemd[1]: Starting Load Kernel Modules...
Jan 26 16:10:36 compute-0 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 26 16:10:36 compute-0 kernel: Bridge firewalling registered
Jan 26 16:10:36 compute-0 systemd-modules-load[38875]: Inserted module 'br_netfilter'
Jan 26 16:10:36 compute-0 systemd[1]: Finished Load Kernel Modules.
Jan 26 16:10:36 compute-0 sudo[38869]: pam_unix(sudo:session): session closed for user root
Jan 26 16:10:36 compute-0 sudo[39028]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hyfrqyofedlycmkpwahegvyueppvkznw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769443836.4962566-373-231450049975911/AnsiballZ_stat.py'
Jan 26 16:10:36 compute-0 sudo[39028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:10:36 compute-0 python3.9[39030]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:10:37 compute-0 sudo[39028]: pam_unix(sudo:session): session closed for user root
Jan 26 16:10:37 compute-0 sudo[39151]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xufthxalgkydvonifasqovwhklfyfdnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769443836.4962566-373-231450049975911/AnsiballZ_copy.py'
Jan 26 16:10:37 compute-0 sudo[39151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:10:37 compute-0 python3.9[39153]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769443836.4962566-373-231450049975911/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:10:37 compute-0 sudo[39151]: pam_unix(sudo:session): session closed for user root
Jan 26 16:10:38 compute-0 sudo[39303]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqlgfgsrwwrwrxffqbdxlmhqiczzfffl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769443837.8993323-391-187704272995980/AnsiballZ_dnf.py'
Jan 26 16:10:38 compute-0 sudo[39303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:10:38 compute-0 python3.9[39305]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 26 16:10:42 compute-0 dbus-broker-launch[761]: Noticed file-system modification, trigger reload.
Jan 26 16:10:42 compute-0 dbus-broker-launch[761]: Noticed file-system modification, trigger reload.
Jan 26 16:10:42 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 26 16:10:42 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 26 16:10:42 compute-0 systemd[1]: Reloading.
Jan 26 16:10:42 compute-0 systemd-rc-local-generator[39372]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 16:10:42 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 26 16:10:43 compute-0 sudo[39303]: pam_unix(sudo:session): session closed for user root
Jan 26 16:10:44 compute-0 python3.9[41161]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 16:10:45 compute-0 python3.9[42048]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Jan 26 16:10:46 compute-0 python3.9[42818]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 16:10:46 compute-0 sudo[43473]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wumoemsilejazdasxcfuwihyjrujuljl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769443846.424126-430-229554031620325/AnsiballZ_command.py'
Jan 26 16:10:46 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 26 16:10:46 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 26 16:10:46 compute-0 systemd[1]: man-db-cache-update.service: Consumed 5.146s CPU time.
Jan 26 16:10:46 compute-0 sudo[43473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:10:46 compute-0 systemd[1]: run-r995d5e2683544792ad563921b9777ad4.service: Deactivated successfully.
Jan 26 16:10:46 compute-0 python3.9[43476]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 16:10:47 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 26 16:10:47 compute-0 systemd[1]: Starting Authorization Manager...
Jan 26 16:10:47 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 26 16:10:47 compute-0 polkitd[43693]: Started polkitd version 0.117
Jan 26 16:10:47 compute-0 polkitd[43693]: Loading rules from directory /etc/polkit-1/rules.d
Jan 26 16:10:47 compute-0 polkitd[43693]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 26 16:10:47 compute-0 polkitd[43693]: Finished loading, compiling and executing 2 rules
Jan 26 16:10:47 compute-0 systemd[1]: Started Authorization Manager.
Jan 26 16:10:47 compute-0 polkitd[43693]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Jan 26 16:10:47 compute-0 sudo[43473]: pam_unix(sudo:session): session closed for user root
Jan 26 16:10:47 compute-0 sudo[43861]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxxvfyowjzskspzzwgyuqxsaipxfqlvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769443847.6839676-439-155592401360955/AnsiballZ_systemd.py'
Jan 26 16:10:47 compute-0 sudo[43861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:10:48 compute-0 python3.9[43863]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 16:10:48 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Jan 26 16:10:48 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Jan 26 16:10:48 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Jan 26 16:10:48 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 26 16:10:48 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 26 16:10:48 compute-0 sudo[43861]: pam_unix(sudo:session): session closed for user root
Jan 26 16:10:49 compute-0 python3.9[44024]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Jan 26 16:10:51 compute-0 sudo[44174]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmgatvdmnebofigojjgwndtxrxdzoyao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769443850.9078653-496-130422099895458/AnsiballZ_systemd.py'
Jan 26 16:10:51 compute-0 sudo[44174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:10:51 compute-0 python3.9[44176]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 16:10:51 compute-0 systemd[1]: Reloading.
Jan 26 16:10:51 compute-0 systemd-rc-local-generator[44207]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 16:10:51 compute-0 sudo[44174]: pam_unix(sudo:session): session closed for user root
Jan 26 16:10:52 compute-0 sudo[44364]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xasfzqfanvqeplcbsyyhdsgevmjscwhy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769443851.916041-496-15960729916786/AnsiballZ_systemd.py'
Jan 26 16:10:52 compute-0 sudo[44364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:10:52 compute-0 python3.9[44366]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 16:10:52 compute-0 systemd[1]: Reloading.
Jan 26 16:10:52 compute-0 systemd-rc-local-generator[44394]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 16:10:52 compute-0 sudo[44364]: pam_unix(sudo:session): session closed for user root
Jan 26 16:10:53 compute-0 sudo[44553]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftcyptktymuviqefyixtjfmfynhcqjaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769443853.0078876-512-42409757914266/AnsiballZ_command.py'
Jan 26 16:10:53 compute-0 sudo[44553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:10:53 compute-0 python3.9[44555]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 16:10:53 compute-0 sudo[44553]: pam_unix(sudo:session): session closed for user root
Jan 26 16:10:54 compute-0 sudo[44706]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzsplstbjnmgrrukiwouphvccbazboub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769443853.7368155-520-46280691828321/AnsiballZ_command.py'
Jan 26 16:10:54 compute-0 sudo[44706]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:10:54 compute-0 python3.9[44708]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 16:10:54 compute-0 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Jan 26 16:10:54 compute-0 sudo[44706]: pam_unix(sudo:session): session closed for user root
Jan 26 16:10:54 compute-0 sudo[44859]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntdtvlooxdjnfalwebdkionvktgapmqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769443854.4041424-528-231269821020852/AnsiballZ_command.py'
Jan 26 16:10:54 compute-0 sudo[44859]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:10:54 compute-0 python3.9[44861]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 16:10:56 compute-0 sudo[44859]: pam_unix(sudo:session): session closed for user root
Jan 26 16:10:56 compute-0 sudo[45021]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwhctxaurgdifswtkldjoecjobcknmhp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769443856.5674872-536-198530195519207/AnsiballZ_command.py'
Jan 26 16:10:56 compute-0 sudo[45021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:10:57 compute-0 python3.9[45023]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 16:10:57 compute-0 sudo[45021]: pam_unix(sudo:session): session closed for user root
Jan 26 16:10:57 compute-0 sudo[45174]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipbuboerdjocihgcluxgdxaqgifmglbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769443857.2380319-544-144579073726887/AnsiballZ_systemd.py'
Jan 26 16:10:57 compute-0 sudo[45174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:10:57 compute-0 python3.9[45176]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 26 16:10:57 compute-0 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 26 16:10:57 compute-0 systemd[1]: Stopped Apply Kernel Variables.
Jan 26 16:10:57 compute-0 systemd[1]: Stopping Apply Kernel Variables...
Jan 26 16:10:57 compute-0 systemd[1]: Starting Apply Kernel Variables...
Jan 26 16:10:57 compute-0 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 26 16:10:57 compute-0 systemd[1]: Finished Apply Kernel Variables.
Jan 26 16:10:57 compute-0 sudo[45174]: pam_unix(sudo:session): session closed for user root
Jan 26 16:10:58 compute-0 sshd-session[31595]: Connection closed by 192.168.122.30 port 32894
Jan 26 16:10:58 compute-0 sshd-session[31592]: pam_unix(sshd:session): session closed for user zuul
Jan 26 16:10:58 compute-0 systemd-logind[788]: Session 10 logged out. Waiting for processes to exit.
Jan 26 16:10:58 compute-0 systemd[1]: session-10.scope: Deactivated successfully.
Jan 26 16:10:58 compute-0 systemd[1]: session-10.scope: Consumed 2min 32.414s CPU time.
Jan 26 16:10:58 compute-0 systemd-logind[788]: Removed session 10.
Jan 26 16:11:04 compute-0 sshd-session[45206]: Accepted publickey for zuul from 192.168.122.30 port 47090 ssh2: ECDSA SHA256:e2nTWydmUlQnW5BYhfFSR+TRarHnUqn0luI8Mjiyqhk
Jan 26 16:11:04 compute-0 systemd-logind[788]: New session 11 of user zuul.
Jan 26 16:11:04 compute-0 systemd[1]: Started Session 11 of User zuul.
Jan 26 16:11:04 compute-0 sshd-session[45206]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 26 16:11:05 compute-0 python3.9[45359]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 16:11:06 compute-0 python3.9[45513]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 16:11:07 compute-0 sudo[45667]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcqggkczlpaiiwlftyendbifdhtggyzt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769443867.1711733-45-148009242863802/AnsiballZ_command.py'
Jan 26 16:11:07 compute-0 sudo[45667]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:11:07 compute-0 python3.9[45669]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 16:11:07 compute-0 sudo[45667]: pam_unix(sudo:session): session closed for user root
Jan 26 16:11:08 compute-0 python3.9[45820]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 16:11:09 compute-0 sudo[45974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afwfmklqhutvggbbzxygrzqdugeccwow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769443869.1293187-65-279878427640300/AnsiballZ_setup.py'
Jan 26 16:11:09 compute-0 sudo[45974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:11:09 compute-0 python3.9[45976]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 26 16:11:10 compute-0 sudo[45974]: pam_unix(sudo:session): session closed for user root
Jan 26 16:11:10 compute-0 sudo[46058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcmfjwatqckwkqjfchcafxtybibhbopf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769443869.1293187-65-279878427640300/AnsiballZ_dnf.py'
Jan 26 16:11:10 compute-0 sudo[46058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:11:10 compute-0 python3.9[46060]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 26 16:11:12 compute-0 sudo[46058]: pam_unix(sudo:session): session closed for user root
Jan 26 16:11:12 compute-0 sudo[46211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zchjwkwmnmramuvsgwpccgdbabqnoytg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769443872.4659095-77-164478012744468/AnsiballZ_setup.py'
Jan 26 16:11:12 compute-0 sudo[46211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:11:13 compute-0 python3.9[46213]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 26 16:11:13 compute-0 sudo[46211]: pam_unix(sudo:session): session closed for user root
Jan 26 16:11:14 compute-0 sudo[46382]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-medtyfqxwfcvalitjjwomdocyetkuckh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769443873.5131004-88-204230374748944/AnsiballZ_file.py'
Jan 26 16:11:14 compute-0 sudo[46382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:11:14 compute-0 python3.9[46384]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:11:14 compute-0 sudo[46382]: pam_unix(sudo:session): session closed for user root
Jan 26 16:11:14 compute-0 sudo[46534]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddnnrulaapqrfmtbyirmhksctcnyitmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769443874.5019743-96-224945998009049/AnsiballZ_command.py'
Jan 26 16:11:14 compute-0 sudo[46534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:11:14 compute-0 python3.9[46536]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 16:11:15 compute-0 podman[46537]: 2026-01-26 16:11:15.150646777 +0000 UTC m=+0.093360957 system refresh
Jan 26 16:11:15 compute-0 sudo[46534]: pam_unix(sudo:session): session closed for user root
Jan 26 16:11:15 compute-0 sudo[46697]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uinnuombshialiyphakmmzdbacfkppdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769443875.366453-104-208203375823736/AnsiballZ_stat.py'
Jan 26 16:11:15 compute-0 sudo[46697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:11:15 compute-0 python3.9[46699]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:11:16 compute-0 sudo[46697]: pam_unix(sudo:session): session closed for user root
Jan 26 16:11:16 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 26 16:11:16 compute-0 sudo[46820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clvvgueijojfcrwdznbbmzpoyebjxvdb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769443875.366453-104-208203375823736/AnsiballZ_copy.py'
Jan 26 16:11:16 compute-0 sudo[46820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:11:16 compute-0 python3.9[46822]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769443875.366453-104-208203375823736/.source.json follow=False _original_basename=podman_network_config.j2 checksum=42c69222511273671e174a7e65a24c3de72488ec backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:11:16 compute-0 sudo[46820]: pam_unix(sudo:session): session closed for user root
Jan 26 16:11:17 compute-0 sudo[46972]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cojuemzrcfqtnqqcxjeqtdjtgldtmxqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769443876.861912-119-28104204676709/AnsiballZ_stat.py'
Jan 26 16:11:17 compute-0 sudo[46972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:11:17 compute-0 python3.9[46974]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:11:17 compute-0 sudo[46972]: pam_unix(sudo:session): session closed for user root
Jan 26 16:11:17 compute-0 sudo[47095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exvuqxjnokmjaukulggjmleylkolpgjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769443876.861912-119-28104204676709/AnsiballZ_copy.py'
Jan 26 16:11:17 compute-0 sudo[47095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:11:17 compute-0 python3.9[47097]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769443876.861912-119-28104204676709/.source.conf follow=False _original_basename=registries.conf.j2 checksum=76a61c2dcef8c729f52de4ab2e4a413b55a36d10 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:11:17 compute-0 sudo[47095]: pam_unix(sudo:session): session closed for user root
Jan 26 16:11:18 compute-0 sudo[47247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgpqjhyvqmmcpcrrgrtlvnkytuguvhxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769443878.1654775-135-265743998160268/AnsiballZ_ini_file.py'
Jan 26 16:11:18 compute-0 sudo[47247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:11:19 compute-0 python3.9[47249]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:11:19 compute-0 sudo[47247]: pam_unix(sudo:session): session closed for user root
Jan 26 16:11:19 compute-0 sudo[47399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojwbnkzjmomdamzrjwkuatfxctgpjymd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769443879.3202827-135-18202448461627/AnsiballZ_ini_file.py'
Jan 26 16:11:19 compute-0 sudo[47399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:11:19 compute-0 python3.9[47401]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:11:19 compute-0 sudo[47399]: pam_unix(sudo:session): session closed for user root
Jan 26 16:11:20 compute-0 sudo[47551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvilixicsvgwbbrdjqucmyhuhbbhtzve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769443879.9557204-135-264261172351689/AnsiballZ_ini_file.py'
Jan 26 16:11:20 compute-0 sudo[47551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:11:20 compute-0 python3.9[47553]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:11:20 compute-0 sudo[47551]: pam_unix(sudo:session): session closed for user root
Jan 26 16:11:20 compute-0 sudo[47703]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awyawjcsmqfdxyjqtwpiixssgosbswqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769443880.6184304-135-264571751222336/AnsiballZ_ini_file.py'
Jan 26 16:11:20 compute-0 sudo[47703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:11:21 compute-0 python3.9[47705]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:11:21 compute-0 sudo[47703]: pam_unix(sudo:session): session closed for user root
Jan 26 16:11:22 compute-0 python3.9[47855]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 16:11:22 compute-0 sudo[48007]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjavyomhbljpcaacvjedjumviabnwitr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769443882.3146355-175-16793257492195/AnsiballZ_dnf.py'
Jan 26 16:11:22 compute-0 sudo[48007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:11:22 compute-0 python3.9[48009]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 26 16:11:24 compute-0 sudo[48007]: pam_unix(sudo:session): session closed for user root
Jan 26 16:11:24 compute-0 sudo[48160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yugzuuswdfdftupqxrunvchkuchkxzmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769443884.3713517-183-226482151905241/AnsiballZ_dnf.py'
Jan 26 16:11:24 compute-0 sudo[48160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:11:24 compute-0 python3.9[48162]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openstack-network-scripts'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 26 16:11:26 compute-0 sudo[48160]: pam_unix(sudo:session): session closed for user root
Jan 26 16:11:27 compute-0 sudo[48320]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdsfqqghkcaplstuekjdlftbexhdmnia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769443887.13995-193-150728983903942/AnsiballZ_dnf.py'
Jan 26 16:11:27 compute-0 sudo[48320]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:11:27 compute-0 python3.9[48322]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['podman', 'buildah'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 26 16:11:29 compute-0 sudo[48320]: pam_unix(sudo:session): session closed for user root
Jan 26 16:11:29 compute-0 sudo[48473]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrrmtfvkndlggolyrpsebkvqgbdcohxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769443889.4111702-202-145291129550437/AnsiballZ_dnf.py'
Jan 26 16:11:29 compute-0 sudo[48473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:11:30 compute-0 python3.9[48475]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['tuned', 'tuned-profiles-cpu-partitioning'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 26 16:11:31 compute-0 sudo[48473]: pam_unix(sudo:session): session closed for user root
Jan 26 16:11:32 compute-0 sudo[48626]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekwjkmbgqccwtuxlcglagcntgvhzwlwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769443891.7458923-213-125210816182744/AnsiballZ_dnf.py'
Jan 26 16:11:32 compute-0 sudo[48626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:11:32 compute-0 python3.9[48628]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['NetworkManager-ovs'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 26 16:11:33 compute-0 sudo[48626]: pam_unix(sudo:session): session closed for user root
Jan 26 16:11:34 compute-0 sudo[48782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwrvvpzgohssilhnnmhicqaieognybeo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769443894.0499651-221-98479636065447/AnsiballZ_dnf.py'
Jan 26 16:11:34 compute-0 sudo[48782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:11:34 compute-0 python3.9[48784]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['os-net-config'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 26 16:11:37 compute-0 sudo[48782]: pam_unix(sudo:session): session closed for user root
Jan 26 16:11:37 compute-0 sudo[48951]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ooabrricfjnpjsokfhywiidzhnmyvdaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769443897.722492-230-6260228499799/AnsiballZ_dnf.py'
Jan 26 16:11:37 compute-0 sudo[48951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:11:38 compute-0 python3.9[48953]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openssh-server'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 26 16:11:39 compute-0 sudo[48951]: pam_unix(sudo:session): session closed for user root
Jan 26 16:11:40 compute-0 sudo[49104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ectfugdhjpdgcrgjriqlghmrnieyfndt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769443899.7459981-239-244790292196570/AnsiballZ_dnf.py'
Jan 26 16:11:40 compute-0 sudo[49104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:11:40 compute-0 python3.9[49106]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 26 16:11:50 compute-0 sudo[49104]: pam_unix(sudo:session): session closed for user root
Jan 26 16:11:51 compute-0 sudo[49440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmsqhktsreojvveqlahjjpmmgzccpsxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769443910.7023392-248-138868429990673/AnsiballZ_dnf.py'
Jan 26 16:11:51 compute-0 sudo[49440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:11:51 compute-0 python3.9[49442]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['iscsi-initiator-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 26 16:11:52 compute-0 sudo[49440]: pam_unix(sudo:session): session closed for user root
Jan 26 16:11:53 compute-0 sudo[49596]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwlkipltfffaddnihuibgxwpqztrlhgb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769443913.1136446-258-60633364163939/AnsiballZ_dnf.py'
Jan 26 16:11:53 compute-0 sudo[49596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:11:53 compute-0 python3.9[49598]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['device-mapper-multipath'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 26 16:11:55 compute-0 sudo[49596]: pam_unix(sudo:session): session closed for user root
Jan 26 16:11:56 compute-0 sudo[49753]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgqrtkhbizgisnbgfzmuqsilcuazldex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769443915.8963048-269-217178205987735/AnsiballZ_file.py'
Jan 26 16:11:56 compute-0 sudo[49753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:11:56 compute-0 python3.9[49755]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:11:56 compute-0 sudo[49753]: pam_unix(sudo:session): session closed for user root
Jan 26 16:11:57 compute-0 sudo[49928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqgafubpmjvhkgdhruvdhjljtssrhabg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769443916.8599706-277-140888144183078/AnsiballZ_stat.py'
Jan 26 16:11:57 compute-0 sudo[49928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:11:57 compute-0 python3.9[49930]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:11:57 compute-0 sudo[49928]: pam_unix(sudo:session): session closed for user root
Jan 26 16:11:57 compute-0 sudo[50051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqqopnbzryvkaqxujncxpkbmktffxlzj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769443916.8599706-277-140888144183078/AnsiballZ_copy.py'
Jan 26 16:11:57 compute-0 sudo[50051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:11:58 compute-0 python3.9[50053]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1769443916.8599706-277-140888144183078/.source.json _original_basename=.t6vclljs follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:11:58 compute-0 sudo[50051]: pam_unix(sudo:session): session closed for user root
Jan 26 16:11:59 compute-0 sudo[50203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdvliqvgfgyzvgbzwmllqjltoxgifkos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769443918.416401-295-5812511931217/AnsiballZ_podman_image.py'
Jan 26 16:11:59 compute-0 sudo[50203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:11:59 compute-0 python3.9[50205]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Jan 26 16:11:59 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 26 16:12:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat945501607-lower\x2dmapped.mount: Deactivated successfully.
Jan 26 16:12:05 compute-0 podman[50218]: 2026-01-26 16:12:05.554091036 +0000 UTC m=+5.699723821 image pull a17927617ef5a603f0594ee0d6df65aabdc9e0303ccc5a52c36f193de33ee0fe quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 26 16:12:05 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 26 16:12:05 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 26 16:12:05 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 26 16:12:05 compute-0 sudo[50203]: pam_unix(sudo:session): session closed for user root
Jan 26 16:12:06 compute-0 sudo[50515]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fihnvvfwxialjhxygavqarcdmwwguayq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769443926.179524-306-222185270118663/AnsiballZ_podman_image.py'
Jan 26 16:12:06 compute-0 sudo[50515]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:12:06 compute-0 python3.9[50517]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Jan 26 16:12:06 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 26 16:12:11 compute-0 irqbalance[787]: Cannot change IRQ 27 affinity: Operation not permitted
Jan 26 16:12:11 compute-0 irqbalance[787]: IRQ 27 affinity is now unmanaged
Jan 26 16:12:17 compute-0 podman[50530]: 2026-01-26 16:12:17.442894646 +0000 UTC m=+10.666288089 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 16:12:17 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 26 16:12:17 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 26 16:12:17 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 26 16:12:17 compute-0 sudo[50515]: pam_unix(sudo:session): session closed for user root
Jan 26 16:12:18 compute-0 sudo[50823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amweitxipaxzjeixaxtugvovaylncqqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769443937.998018-316-258587147916127/AnsiballZ_podman_image.py'
Jan 26 16:12:18 compute-0 sudo[50823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:12:18 compute-0 python3.9[50825]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Jan 26 16:12:18 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 26 16:12:30 compute-0 podman[50837]: 2026-01-26 16:12:30.944365445 +0000 UTC m=+12.352883937 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 26 16:12:30 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 26 16:12:31 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 26 16:12:31 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 26 16:12:31 compute-0 sudo[50823]: pam_unix(sudo:session): session closed for user root
Jan 26 16:12:31 compute-0 sudo[51100]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlncpqxwaywtbtgbgdkyslogswwayyor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769443951.5774622-327-98848501422228/AnsiballZ_podman_image.py'
Jan 26 16:12:31 compute-0 sudo[51100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:12:32 compute-0 python3.9[51102]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Jan 26 16:12:32 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 26 16:12:51 compute-0 podman[51114]: 2026-01-26 16:12:51.80009536 +0000 UTC m=+19.598065445 image pull 673eb625b19e37533ec15e219000c7d8233802c3ffa5adfdd7e8765ce31baf5c quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested
Jan 26 16:12:51 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 26 16:12:51 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 26 16:12:51 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 26 16:12:52 compute-0 sudo[51100]: pam_unix(sudo:session): session closed for user root
Jan 26 16:12:52 compute-0 sudo[51428]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vuksjvwagqojwykbqapsgqekpnlfxyjf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769443972.332251-327-184944227462234/AnsiballZ_podman_image.py'
Jan 26 16:12:52 compute-0 sudo[51428]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:12:52 compute-0 python3.9[51430]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/prometheus/node-exporter:v1.5.0 tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Jan 26 16:12:52 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 26 16:12:54 compute-0 podman[51442]: 2026-01-26 16:12:54.015477638 +0000 UTC m=+1.099875427 image pull 0da6a335fe1356545476b749c68f022c897de3a2139e8f0054f6937349ee2b83 quay.io/prometheus/node-exporter:v1.5.0
Jan 26 16:12:54 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 26 16:12:54 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 26 16:12:54 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 26 16:12:54 compute-0 sudo[51428]: pam_unix(sudo:session): session closed for user root
Jan 26 16:12:54 compute-0 sudo[51716]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trbkedkbxmvjsozpemcltzewpehecatg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769443974.5246727-343-77822502963579/AnsiballZ_podman_image.py'
Jan 26 16:12:54 compute-0 sudo[51716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:12:55 compute-0 python3.9[51718]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Jan 26 16:12:55 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 26 16:12:57 compute-0 podman[51730]: 2026-01-26 16:12:57.939668752 +0000 UTC m=+2.856957238 image pull a92f7bca491c0b0ce2687db04282e6791be0613adb46862c56450b0e1308679d quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified
Jan 26 16:12:57 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 26 16:12:57 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 26 16:12:58 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 26 16:12:58 compute-0 sudo[51716]: pam_unix(sudo:session): session closed for user root
Jan 26 16:12:58 compute-0 sudo[51983]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jggcufyqskzeblmfkpezthbbpdijugrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769443978.2831843-343-197691333527379/AnsiballZ_podman_image.py'
Jan 26 16:12:58 compute-0 sudo[51983]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:12:58 compute-0 python3.9[51985]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/sustainable_computing_io/kepler:release-0.7.12 tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Jan 26 16:13:07 compute-0 podman[51998]: 2026-01-26 16:13:07.251289489 +0000 UTC m=+8.456433958 image pull ed61e3ea3188391c18595d8ceada2a5a01f0ece915c62fde355798735b5208d7 quay.io/sustainable_computing_io/kepler:release-0.7.12
Jan 26 16:13:07 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 26 16:13:07 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 26 16:13:07 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 26 16:13:07 compute-0 sudo[51983]: pam_unix(sudo:session): session closed for user root
Jan 26 16:13:08 compute-0 sshd-session[45209]: Connection closed by 192.168.122.30 port 47090
Jan 26 16:13:08 compute-0 sshd-session[45206]: pam_unix(sshd:session): session closed for user zuul
Jan 26 16:13:08 compute-0 systemd-logind[788]: Session 11 logged out. Waiting for processes to exit.
Jan 26 16:13:08 compute-0 systemd[1]: session-11.scope: Deactivated successfully.
Jan 26 16:13:08 compute-0 systemd[1]: session-11.scope: Consumed 2min 32.823s CPU time.
Jan 26 16:13:08 compute-0 systemd-logind[788]: Removed session 11.
Jan 26 16:13:13 compute-0 sshd-session[52258]: Accepted publickey for zuul from 192.168.122.30 port 36054 ssh2: ECDSA SHA256:e2nTWydmUlQnW5BYhfFSR+TRarHnUqn0luI8Mjiyqhk
Jan 26 16:13:13 compute-0 systemd-logind[788]: New session 12 of user zuul.
Jan 26 16:13:13 compute-0 systemd[1]: Started Session 12 of User zuul.
Jan 26 16:13:13 compute-0 sshd-session[52258]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 26 16:13:14 compute-0 python3.9[52411]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 16:13:15 compute-0 sudo[52565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rodvxpdryoybhttpmfrllrgtcxbencbg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769443995.5172043-31-10552804450123/AnsiballZ_getent.py'
Jan 26 16:13:15 compute-0 sudo[52565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:13:16 compute-0 python3.9[52567]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Jan 26 16:13:16 compute-0 sudo[52565]: pam_unix(sudo:session): session closed for user root
Jan 26 16:13:16 compute-0 sudo[52718]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbazgmqjuafggzbxzrkctgangsmpxxsf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769443996.3056269-39-175290762127318/AnsiballZ_group.py'
Jan 26 16:13:16 compute-0 sudo[52718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:13:17 compute-0 python3.9[52720]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 26 16:13:17 compute-0 groupadd[52721]: group added to /etc/group: name=openvswitch, GID=42476
Jan 26 16:13:17 compute-0 groupadd[52721]: group added to /etc/gshadow: name=openvswitch
Jan 26 16:13:17 compute-0 groupadd[52721]: new group: name=openvswitch, GID=42476
Jan 26 16:13:17 compute-0 sudo[52718]: pam_unix(sudo:session): session closed for user root
Jan 26 16:13:17 compute-0 sudo[52876]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryspahudxqsnceqyvenbhiftgxlczggv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769443997.2652605-47-210796512931354/AnsiballZ_user.py'
Jan 26 16:13:17 compute-0 sudo[52876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:13:18 compute-0 python3.9[52878]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 26 16:13:18 compute-0 useradd[52880]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/0
Jan 26 16:13:18 compute-0 useradd[52880]: add 'openvswitch' to group 'hugetlbfs'
Jan 26 16:13:18 compute-0 useradd[52880]: add 'openvswitch' to shadow group 'hugetlbfs'
Jan 26 16:13:18 compute-0 sudo[52876]: pam_unix(sudo:session): session closed for user root
Jan 26 16:13:18 compute-0 sudo[53036]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-teeraullmlmbbgmhqkntsjfygutasvup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769443998.534048-57-90405551306749/AnsiballZ_setup.py'
Jan 26 16:13:18 compute-0 sudo[53036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:13:19 compute-0 python3.9[53038]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 26 16:13:19 compute-0 sudo[53036]: pam_unix(sudo:session): session closed for user root
Jan 26 16:13:20 compute-0 sudo[53120]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqfgdxpxacymyfkkhecpvdpxnmfjkfhr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769443998.534048-57-90405551306749/AnsiballZ_dnf.py'
Jan 26 16:13:20 compute-0 sudo[53120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:13:21 compute-0 python3.9[53122]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 26 16:13:22 compute-0 sudo[53120]: pam_unix(sudo:session): session closed for user root
Jan 26 16:13:23 compute-0 sudo[53282]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkeivyuyjmwijyhrreftcownbqzliwit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444003.0740533-71-139593231187947/AnsiballZ_dnf.py'
Jan 26 16:13:23 compute-0 sudo[53282]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:13:23 compute-0 python3.9[53284]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 26 16:13:37 compute-0 kernel: SELinux:  Converting 2738 SID table entries...
Jan 26 16:13:37 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 26 16:13:37 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 26 16:13:37 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 26 16:13:37 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 26 16:13:37 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 26 16:13:37 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 26 16:13:37 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 26 16:13:37 compute-0 groupadd[53307]: group added to /etc/group: name=unbound, GID=994
Jan 26 16:13:37 compute-0 groupadd[53307]: group added to /etc/gshadow: name=unbound
Jan 26 16:13:37 compute-0 groupadd[53307]: new group: name=unbound, GID=994
Jan 26 16:13:37 compute-0 useradd[53314]: new user: name=unbound, UID=993, GID=994, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Jan 26 16:13:37 compute-0 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=7 res=1
Jan 26 16:13:37 compute-0 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Jan 26 16:13:38 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 26 16:13:38 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 26 16:13:38 compute-0 systemd[1]: Reloading.
Jan 26 16:13:39 compute-0 systemd-rc-local-generator[53812]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 16:13:39 compute-0 systemd-sysv-generator[53815]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 16:13:39 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 26 16:13:39 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 26 16:13:39 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 26 16:13:39 compute-0 systemd[1]: run-r7d1aa923f5534961b25e56d7ed901b6f.service: Deactivated successfully.
Jan 26 16:13:39 compute-0 sudo[53282]: pam_unix(sudo:session): session closed for user root
Jan 26 16:13:40 compute-0 sudo[54379]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbmebgxnevsyrrgyfslitpojiinporfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444020.0389028-79-241933048110855/AnsiballZ_systemd.py'
Jan 26 16:13:40 compute-0 sudo[54379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:13:41 compute-0 python3.9[54381]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 26 16:13:41 compute-0 systemd[1]: Reloading.
Jan 26 16:13:41 compute-0 systemd-rc-local-generator[54409]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 16:13:41 compute-0 systemd-sysv-generator[54414]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 16:13:41 compute-0 systemd[1]: Starting Open vSwitch Database Unit...
Jan 26 16:13:41 compute-0 chown[54422]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Jan 26 16:13:41 compute-0 ovs-ctl[54427]: /etc/openvswitch/conf.db does not exist ... (warning).
Jan 26 16:13:41 compute-0 ovs-ctl[54427]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Jan 26 16:13:41 compute-0 ovs-ctl[54427]: Starting ovsdb-server [  OK  ]
Jan 26 16:13:41 compute-0 ovs-vsctl[54476]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Jan 26 16:13:41 compute-0 ovs-vsctl[54495]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"1c72c11d-5050-47c3-89e8-912766588fb3\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Jan 26 16:13:41 compute-0 ovs-ctl[54427]: Configuring Open vSwitch system IDs [  OK  ]
Jan 26 16:13:41 compute-0 ovs-vsctl[54501]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Jan 26 16:13:41 compute-0 ovs-ctl[54427]: Enabling remote OVSDB managers [  OK  ]
Jan 26 16:13:41 compute-0 systemd[1]: Started Open vSwitch Database Unit.
Jan 26 16:13:41 compute-0 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Jan 26 16:13:41 compute-0 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Jan 26 16:13:41 compute-0 systemd[1]: Starting Open vSwitch Forwarding Unit...
Jan 26 16:13:41 compute-0 kernel: openvswitch: Open vSwitch switching datapath
Jan 26 16:13:41 compute-0 ovs-ctl[54545]: Inserting openvswitch module [  OK  ]
Jan 26 16:13:41 compute-0 ovs-ctl[54514]: Starting ovs-vswitchd [  OK  ]
Jan 26 16:13:41 compute-0 ovs-ctl[54514]: Enabling remote OVSDB managers [  OK  ]
Jan 26 16:13:41 compute-0 ovs-vsctl[54562]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Jan 26 16:13:41 compute-0 systemd[1]: Started Open vSwitch Forwarding Unit.
Jan 26 16:13:41 compute-0 systemd[1]: Starting Open vSwitch...
Jan 26 16:13:41 compute-0 systemd[1]: Finished Open vSwitch.
Jan 26 16:13:41 compute-0 sudo[54379]: pam_unix(sudo:session): session closed for user root
Jan 26 16:13:42 compute-0 python3.9[54714]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 16:13:43 compute-0 sudo[54864]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhgpbmgqzbgnrjjqhlhzfsfoxdpmitnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444023.0194142-97-41018830355351/AnsiballZ_sefcontext.py'
Jan 26 16:13:43 compute-0 sudo[54864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:13:43 compute-0 python3.9[54866]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Jan 26 16:13:44 compute-0 kernel: SELinux:  Converting 2752 SID table entries...
Jan 26 16:13:44 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 26 16:13:44 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 26 16:13:44 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 26 16:13:44 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 26 16:13:44 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 26 16:13:44 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 26 16:13:44 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 26 16:13:45 compute-0 sudo[54864]: pam_unix(sudo:session): session closed for user root
Jan 26 16:13:46 compute-0 python3.9[55021]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 16:13:46 compute-0 sudo[55177]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khzyuvylriorpbttbakohfkrztxcsmrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444026.5191324-115-30337480784788/AnsiballZ_dnf.py'
Jan 26 16:13:46 compute-0 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Jan 26 16:13:46 compute-0 sudo[55177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:13:47 compute-0 python3.9[55179]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 26 16:13:48 compute-0 sudo[55177]: pam_unix(sudo:session): session closed for user root
Jan 26 16:13:49 compute-0 sudo[55330]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibhfbxwpryxcezuaytaaallfyslwwvlv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444028.6429188-123-270508265889439/AnsiballZ_command.py'
Jan 26 16:13:49 compute-0 sudo[55330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:13:49 compute-0 python3.9[55332]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 16:13:49 compute-0 sudo[55330]: pam_unix(sudo:session): session closed for user root
Jan 26 16:13:50 compute-0 sudo[55617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amcbzbmdiobkjzgolbnlnjkrbpgjdjzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444030.1406517-131-255167721733957/AnsiballZ_file.py'
Jan 26 16:13:50 compute-0 sudo[55617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:13:50 compute-0 python3.9[55619]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Jan 26 16:13:50 compute-0 sudo[55617]: pam_unix(sudo:session): session closed for user root
Jan 26 16:13:51 compute-0 python3.9[55769]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 16:13:52 compute-0 sudo[55921]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhxsdrmsrbdhnvedpbpbbpwlzrnhklvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444031.807148-147-234383504330402/AnsiballZ_dnf.py'
Jan 26 16:13:52 compute-0 sudo[55921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:13:52 compute-0 python3.9[55923]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 26 16:13:54 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 26 16:13:54 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 26 16:13:54 compute-0 systemd[1]: Reloading.
Jan 26 16:13:54 compute-0 systemd-rc-local-generator[55962]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 16:13:54 compute-0 systemd-sysv-generator[55965]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 16:13:54 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 26 16:13:54 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 26 16:13:54 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 26 16:13:54 compute-0 systemd[1]: run-r82afc647091e43089a74af88496e1d29.service: Deactivated successfully.
Jan 26 16:13:54 compute-0 sudo[55921]: pam_unix(sudo:session): session closed for user root
Jan 26 16:13:55 compute-0 sudo[56238]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikhmjcfqgspyacunlwjyiuehybloknxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444034.9940205-155-235990440435744/AnsiballZ_systemd.py'
Jan 26 16:13:55 compute-0 sudo[56238]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:13:55 compute-0 python3.9[56240]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 26 16:13:55 compute-0 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 26 16:13:55 compute-0 systemd[1]: Stopped Network Manager Wait Online.
Jan 26 16:13:55 compute-0 systemd[1]: Stopping Network Manager Wait Online...
Jan 26 16:13:55 compute-0 NetworkManager[7193]: <info>  [1769444035.6730] caught SIGTERM, shutting down normally.
Jan 26 16:13:55 compute-0 NetworkManager[7193]: <info>  [1769444035.6743] dhcp4 (eth0): canceled DHCP transaction
Jan 26 16:13:55 compute-0 NetworkManager[7193]: <info>  [1769444035.6743] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 26 16:13:55 compute-0 NetworkManager[7193]: <info>  [1769444035.6743] dhcp4 (eth0): state changed no lease
Jan 26 16:13:55 compute-0 systemd[1]: Stopping Network Manager...
Jan 26 16:13:55 compute-0 NetworkManager[7193]: <info>  [1769444035.6745] manager: NetworkManager state is now CONNECTED_SITE
Jan 26 16:13:55 compute-0 NetworkManager[7193]: <info>  [1769444035.6802] exiting (success)
Jan 26 16:13:55 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 26 16:13:55 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 26 16:13:55 compute-0 systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 26 16:13:55 compute-0 systemd[1]: Stopped Network Manager.
Jan 26 16:13:55 compute-0 systemd[1]: NetworkManager.service: Consumed 20.134s CPU time, 4.1M memory peak, read 0B from disk, written 28.5K to disk.
Jan 26 16:13:55 compute-0 systemd[1]: Starting Network Manager...
Jan 26 16:13:55 compute-0 NetworkManager[56253]: <info>  [1769444035.7561] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:5124ce38-efa8-40f4-a4ab-032935f2d131)
Jan 26 16:13:55 compute-0 NetworkManager[56253]: <info>  [1769444035.7563] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 26 16:13:55 compute-0 NetworkManager[56253]: <info>  [1769444035.7629] manager[0x55ae5c296000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 26 16:13:55 compute-0 systemd[1]: Starting Hostname Service...
Jan 26 16:13:55 compute-0 systemd[1]: Started Hostname Service.
Jan 26 16:13:55 compute-0 NetworkManager[56253]: <info>  [1769444035.8439] hostname: hostname: using hostnamed
Jan 26 16:13:55 compute-0 NetworkManager[56253]: <info>  [1769444035.8440] hostname: static hostname changed from (none) to "compute-0"
Jan 26 16:13:55 compute-0 NetworkManager[56253]: <info>  [1769444035.8444] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 26 16:13:55 compute-0 NetworkManager[56253]: <info>  [1769444035.8448] manager[0x55ae5c296000]: rfkill: Wi-Fi hardware radio set enabled
Jan 26 16:13:55 compute-0 NetworkManager[56253]: <info>  [1769444035.8449] manager[0x55ae5c296000]: rfkill: WWAN hardware radio set enabled
Jan 26 16:13:55 compute-0 NetworkManager[56253]: <info>  [1769444035.8467] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-ovs.so)
Jan 26 16:13:55 compute-0 NetworkManager[56253]: <info>  [1769444035.8476] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 26 16:13:55 compute-0 NetworkManager[56253]: <info>  [1769444035.8476] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 26 16:13:55 compute-0 NetworkManager[56253]: <info>  [1769444035.8477] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 26 16:13:55 compute-0 NetworkManager[56253]: <info>  [1769444035.8477] manager: Networking is enabled by state file
Jan 26 16:13:55 compute-0 NetworkManager[56253]: <info>  [1769444035.8479] settings: Loaded settings plugin: keyfile (internal)
Jan 26 16:13:55 compute-0 NetworkManager[56253]: <info>  [1769444035.8482] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 26 16:13:55 compute-0 NetworkManager[56253]: <info>  [1769444035.8504] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 26 16:13:55 compute-0 NetworkManager[56253]: <info>  [1769444035.8512] dhcp: init: Using DHCP client 'internal'
Jan 26 16:13:55 compute-0 NetworkManager[56253]: <info>  [1769444035.8514] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 26 16:13:55 compute-0 NetworkManager[56253]: <info>  [1769444035.8518] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:13:55 compute-0 NetworkManager[56253]: <info>  [1769444035.8522] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:13:55 compute-0 NetworkManager[56253]: <info>  [1769444035.8528] device (lo): Activation: starting connection 'lo' (8f11ff48-691a-496d-8a19-1570898b30be)
Jan 26 16:13:55 compute-0 NetworkManager[56253]: <info>  [1769444035.8534] device (eth0): carrier: link connected
Jan 26 16:13:55 compute-0 NetworkManager[56253]: <info>  [1769444035.8537] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 26 16:13:55 compute-0 NetworkManager[56253]: <info>  [1769444035.8541] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 26 16:13:55 compute-0 NetworkManager[56253]: <info>  [1769444035.8542] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 26 16:13:55 compute-0 NetworkManager[56253]: <info>  [1769444035.8547] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 26 16:13:55 compute-0 NetworkManager[56253]: <info>  [1769444035.8552] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 26 16:13:55 compute-0 NetworkManager[56253]: <info>  [1769444035.8556] device (eth1): carrier: link connected
Jan 26 16:13:55 compute-0 NetworkManager[56253]: <info>  [1769444035.8560] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 26 16:13:55 compute-0 NetworkManager[56253]: <info>  [1769444035.8563] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (029c721f-b037-502e-8185-a257ece4e436) (indicated)
Jan 26 16:13:55 compute-0 NetworkManager[56253]: <info>  [1769444035.8564] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 26 16:13:55 compute-0 NetworkManager[56253]: <info>  [1769444035.8569] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 26 16:13:55 compute-0 NetworkManager[56253]: <info>  [1769444035.8575] device (eth1): Activation: starting connection 'ci-private-network' (029c721f-b037-502e-8185-a257ece4e436)
Jan 26 16:13:55 compute-0 NetworkManager[56253]: <info>  [1769444035.8580] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 26 16:13:55 compute-0 systemd[1]: Started Network Manager.
Jan 26 16:13:55 compute-0 NetworkManager[56253]: <info>  [1769444035.8587] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 26 16:13:55 compute-0 NetworkManager[56253]: <info>  [1769444035.8590] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 26 16:13:55 compute-0 NetworkManager[56253]: <info>  [1769444035.8592] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 26 16:13:55 compute-0 NetworkManager[56253]: <info>  [1769444035.8594] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 26 16:13:55 compute-0 NetworkManager[56253]: <info>  [1769444035.8596] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 26 16:13:55 compute-0 NetworkManager[56253]: <info>  [1769444035.8597] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 26 16:13:55 compute-0 NetworkManager[56253]: <info>  [1769444035.8599] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 26 16:13:55 compute-0 NetworkManager[56253]: <info>  [1769444035.8601] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 26 16:13:55 compute-0 NetworkManager[56253]: <info>  [1769444035.8605] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 26 16:13:55 compute-0 NetworkManager[56253]: <info>  [1769444035.8607] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 26 16:13:55 compute-0 NetworkManager[56253]: <info>  [1769444035.8623] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 26 16:13:55 compute-0 NetworkManager[56253]: <info>  [1769444035.8636] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 26 16:13:55 compute-0 NetworkManager[56253]: <info>  [1769444035.8660] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 26 16:13:55 compute-0 NetworkManager[56253]: <info>  [1769444035.8661] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 26 16:13:55 compute-0 NetworkManager[56253]: <info>  [1769444035.8662] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 26 16:13:55 compute-0 NetworkManager[56253]: <info>  [1769444035.8667] device (lo): Activation: successful, device activated.
Jan 26 16:13:55 compute-0 NetworkManager[56253]: <info>  [1769444035.8674] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 26 16:13:55 compute-0 NetworkManager[56253]: <info>  [1769444035.8676] manager: NetworkManager state is now CONNECTED_LOCAL
Jan 26 16:13:55 compute-0 NetworkManager[56253]: <info>  [1769444035.8679] device (eth1): Activation: successful, device activated.
Jan 26 16:13:55 compute-0 NetworkManager[56253]: <info>  [1769444035.8686] dhcp4 (eth0): state changed new lease, address=38.102.83.142
Jan 26 16:13:55 compute-0 NetworkManager[56253]: <info>  [1769444035.8692] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 26 16:13:55 compute-0 systemd[1]: Starting Network Manager Wait Online...
Jan 26 16:13:55 compute-0 NetworkManager[56253]: <info>  [1769444035.8746] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 26 16:13:55 compute-0 NetworkManager[56253]: <info>  [1769444035.8762] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 26 16:13:55 compute-0 NetworkManager[56253]: <info>  [1769444035.8763] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 26 16:13:55 compute-0 NetworkManager[56253]: <info>  [1769444035.8766] manager: NetworkManager state is now CONNECTED_SITE
Jan 26 16:13:55 compute-0 NetworkManager[56253]: <info>  [1769444035.8769] device (eth0): Activation: successful, device activated.
Jan 26 16:13:55 compute-0 NetworkManager[56253]: <info>  [1769444035.8773] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 26 16:13:55 compute-0 NetworkManager[56253]: <info>  [1769444035.8776] manager: startup complete
Jan 26 16:13:55 compute-0 sudo[56238]: pam_unix(sudo:session): session closed for user root
Jan 26 16:13:55 compute-0 systemd[1]: Finished Network Manager Wait Online.
Jan 26 16:13:56 compute-0 sudo[56464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amglewfecldfixejvlxyuhuscojlfhzm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444036.087538-163-57141949488678/AnsiballZ_dnf.py'
Jan 26 16:13:56 compute-0 sudo[56464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:13:56 compute-0 python3.9[56466]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 26 16:14:01 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 26 16:14:01 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 26 16:14:01 compute-0 systemd[1]: Reloading.
Jan 26 16:14:01 compute-0 systemd-sysv-generator[56521]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 16:14:01 compute-0 systemd-rc-local-generator[56517]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 16:14:02 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 26 16:14:02 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 26 16:14:02 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 26 16:14:02 compute-0 systemd[1]: run-r04c6853ce25d4b13a63925ffe4b3ec93.service: Deactivated successfully.
Jan 26 16:14:02 compute-0 sudo[56464]: pam_unix(sudo:session): session closed for user root
Jan 26 16:14:03 compute-0 sudo[56922]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txpbmckcswpemvfsyeoijcmdqsvcjzpq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444043.6426558-175-98294302652012/AnsiballZ_stat.py'
Jan 26 16:14:03 compute-0 sudo[56922]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:14:04 compute-0 python3.9[56924]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 16:14:04 compute-0 sudo[56922]: pam_unix(sudo:session): session closed for user root
Jan 26 16:14:04 compute-0 sudo[57074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upojjynuhhmmxrspxjcrpmdewzsjezii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444044.348962-184-179964526638612/AnsiballZ_ini_file.py'
Jan 26 16:14:04 compute-0 sudo[57074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:14:05 compute-0 python3.9[57076]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:14:05 compute-0 sudo[57074]: pam_unix(sudo:session): session closed for user root
Jan 26 16:14:05 compute-0 sudo[57228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yixmglrshwjmvzmocrxugoumhlnigfmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444045.346105-194-96464259488348/AnsiballZ_ini_file.py'
Jan 26 16:14:05 compute-0 sudo[57228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:14:05 compute-0 python3.9[57230]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:14:05 compute-0 sudo[57228]: pam_unix(sudo:session): session closed for user root
Jan 26 16:14:06 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 26 16:14:06 compute-0 sudo[57380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-osgbjkajssmmhejkqggutrwmjhbanplf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444046.0394604-194-228159441889782/AnsiballZ_ini_file.py'
Jan 26 16:14:06 compute-0 sudo[57380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:14:06 compute-0 python3.9[57382]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:14:06 compute-0 sudo[57380]: pam_unix(sudo:session): session closed for user root
Jan 26 16:14:07 compute-0 sudo[57532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhjkvmjlgeqqbnuxqnwtghidwrapjdkd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444046.830064-209-251866978959552/AnsiballZ_ini_file.py'
Jan 26 16:14:07 compute-0 sudo[57532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:14:07 compute-0 python3.9[57534]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:14:07 compute-0 sudo[57532]: pam_unix(sudo:session): session closed for user root
Jan 26 16:14:07 compute-0 sudo[57684]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubrobqlggamuflkltfjupofowyhlzett ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444047.582751-209-241696304509995/AnsiballZ_ini_file.py'
Jan 26 16:14:07 compute-0 sudo[57684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:14:08 compute-0 python3.9[57686]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:14:08 compute-0 sudo[57684]: pam_unix(sudo:session): session closed for user root
Jan 26 16:14:08 compute-0 sudo[57836]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zohulkgfuimcvlsjklbjcrcjvkctgejb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444048.233346-224-31928091713863/AnsiballZ_stat.py'
Jan 26 16:14:08 compute-0 sudo[57836]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:14:08 compute-0 python3.9[57838]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:14:08 compute-0 sudo[57836]: pam_unix(sudo:session): session closed for user root
Jan 26 16:14:09 compute-0 sudo[57959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckurqwrptevaeqkailsicwesynlriafk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444048.233346-224-31928091713863/AnsiballZ_copy.py'
Jan 26 16:14:09 compute-0 sudo[57959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:14:09 compute-0 python3.9[57961]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1769444048.233346-224-31928091713863/.source _original_basename=.g8ytfrf8 follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:14:09 compute-0 sudo[57959]: pam_unix(sudo:session): session closed for user root
Jan 26 16:14:10 compute-0 sudo[58111]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrqxivrsviseugpvzbyypbljxtwkivan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444049.764735-239-50927016954617/AnsiballZ_file.py'
Jan 26 16:14:10 compute-0 sudo[58111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:14:10 compute-0 python3.9[58113]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:14:10 compute-0 sudo[58111]: pam_unix(sudo:session): session closed for user root
Jan 26 16:14:11 compute-0 sudo[58263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfgjgtgofczaoxhrblwiycweiyciowlk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444050.4553967-247-89535236228840/AnsiballZ_edpm_os_net_config_mappings.py'
Jan 26 16:14:11 compute-0 sudo[58263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:14:11 compute-0 python3.9[58265]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Jan 26 16:14:11 compute-0 sudo[58263]: pam_unix(sudo:session): session closed for user root
Jan 26 16:14:11 compute-0 sudo[58415]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aytvjswzagdeasnpqcattlbqldggmjof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444051.4972517-256-60701531911815/AnsiballZ_file.py'
Jan 26 16:14:11 compute-0 sudo[58415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:14:11 compute-0 python3.9[58417]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:14:11 compute-0 sudo[58415]: pam_unix(sudo:session): session closed for user root
Jan 26 16:14:12 compute-0 sudo[58567]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzocmqoxurmkxzdumbnxwzmabbbkhvbh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444052.2617893-266-80570266457598/AnsiballZ_stat.py'
Jan 26 16:14:12 compute-0 sudo[58567]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:14:12 compute-0 sudo[58567]: pam_unix(sudo:session): session closed for user root
Jan 26 16:14:13 compute-0 sudo[58690]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkxijflpdfgfkidhyghjzesoluzxwyuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444052.2617893-266-80570266457598/AnsiballZ_copy.py'
Jan 26 16:14:13 compute-0 sudo[58690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:14:13 compute-0 sudo[58690]: pam_unix(sudo:session): session closed for user root
Jan 26 16:14:13 compute-0 sudo[58842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfkzuojreoulqrneyvesppdgzjrzetrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444053.4626982-281-164642164075843/AnsiballZ_slurp.py'
Jan 26 16:14:13 compute-0 sudo[58842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:14:14 compute-0 python3.9[58844]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Jan 26 16:14:14 compute-0 sudo[58842]: pam_unix(sudo:session): session closed for user root
Jan 26 16:14:15 compute-0 sudo[59017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhcgkeovniopqrymryhncnmbbrbvlxkp ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444054.3770463-290-224126237868054/async_wrapper.py j674375679921 300 /home/zuul/.ansible/tmp/ansible-tmp-1769444054.3770463-290-224126237868054/AnsiballZ_edpm_os_net_config.py _'
Jan 26 16:14:15 compute-0 sudo[59017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:14:15 compute-0 ansible-async_wrapper.py[59019]: Invoked with j674375679921 300 /home/zuul/.ansible/tmp/ansible-tmp-1769444054.3770463-290-224126237868054/AnsiballZ_edpm_os_net_config.py _
Jan 26 16:14:15 compute-0 ansible-async_wrapper.py[59022]: Starting module and watcher
Jan 26 16:14:15 compute-0 ansible-async_wrapper.py[59022]: Start watching 59023 (300)
Jan 26 16:14:15 compute-0 ansible-async_wrapper.py[59023]: Start module (59023)
Jan 26 16:14:15 compute-0 ansible-async_wrapper.py[59019]: Return async_wrapper task started.
Jan 26 16:14:15 compute-0 sudo[59017]: pam_unix(sudo:session): session closed for user root
Jan 26 16:14:15 compute-0 python3.9[59024]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Jan 26 16:14:16 compute-0 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Jan 26 16:14:16 compute-0 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Jan 26 16:14:16 compute-0 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Jan 26 16:14:16 compute-0 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Jan 26 16:14:16 compute-0 kernel: cfg80211: failed to load regulatory.db
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.2339] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59025 uid=0 result="success"
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.2360] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59025 uid=0 result="success"
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3053] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3055] audit: op="connection-add" uuid="f0d236e8-6161-47ed-97d9-531382636f9c" name="br-ex-br" pid=59025 uid=0 result="success"
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3078] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3081] audit: op="connection-add" uuid="9bf46dc5-77e4-44ff-8c8a-e2e1c5aae645" name="br-ex-port" pid=59025 uid=0 result="success"
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3102] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3104] audit: op="connection-add" uuid="54de8aca-2504-4423-a46b-72c373eb2f19" name="eth1-port" pid=59025 uid=0 result="success"
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3126] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3129] audit: op="connection-add" uuid="47521aef-ff8a-4866-88bb-b12de57abaf4" name="vlan20-port" pid=59025 uid=0 result="success"
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3154] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3157] audit: op="connection-add" uuid="2d1dd59f-6b46-41d3-9f71-060256efd14c" name="vlan21-port" pid=59025 uid=0 result="success"
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3181] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3184] audit: op="connection-add" uuid="15ea1c67-39dc-45f5-9c09-58c183c2dc66" name="vlan22-port" pid=59025 uid=0 result="success"
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3223] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="ipv4.dhcp-timeout,ipv4.dhcp-client-id,ipv6.addr-gen-mode,ipv6.dhcp-timeout,ipv6.method,802-3-ethernet.mtu,connection.autoconnect-priority,connection.timestamp" pid=59025 uid=0 result="success"
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3300] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/10)
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3302] audit: op="connection-add" uuid="526b20da-f37e-42bc-9fc6-af732ddef5a4" name="br-ex-if" pid=59025 uid=0 result="success"
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3383] audit: op="connection-update" uuid="029c721f-b037-502e-8185-a257ece4e436" name="ci-private-network" args="ipv4.routing-rules,ipv4.never-default,ipv4.routes,ipv4.dns,ipv4.addresses,ipv4.method,ipv6.addr-gen-mode,ipv6.routes,ipv6.routing-rules,ipv6.dns,ipv6.addresses,ipv6.method,ovs-interface.type,ovs-external-ids.data,connection.timestamp,connection.master,connection.slave-type,connection.port-type,connection.controller" pid=59025 uid=0 result="success"
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3415] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3417] audit: op="connection-add" uuid="d417260f-fcb7-4313-a97e-9b28a300ad70" name="vlan20-if" pid=59025 uid=0 result="success"
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3454] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3457] audit: op="connection-add" uuid="2090cdd3-c8b3-41ed-b57f-7f042ba63df1" name="vlan21-if" pid=59025 uid=0 result="success"
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3491] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3494] audit: op="connection-add" uuid="bf719339-b8ed-4583-81cb-2d9aff546c71" name="vlan22-if" pid=59025 uid=0 result="success"
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3517] audit: op="connection-delete" uuid="d2f7cd60-7192-331c-9fdd-34ee6dbab928" name="Wired connection 1" pid=59025 uid=0 result="success"
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3543] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <warn>  [1769444057.3547] device (br-ex)[Open vSwitch Bridge]: error setting IPv4 forwarding to '1': Success
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3561] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3570] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (f0d236e8-6161-47ed-97d9-531382636f9c)
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3571] audit: op="connection-activate" uuid="f0d236e8-6161-47ed-97d9-531382636f9c" name="br-ex-br" pid=59025 uid=0 result="success"
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3575] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <warn>  [1769444057.3578] device (br-ex)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3588] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3598] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (9bf46dc5-77e4-44ff-8c8a-e2e1c5aae645)
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3602] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <warn>  [1769444057.3604] device (eth1)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3612] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3619] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (54de8aca-2504-4423-a46b-72c373eb2f19)
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3623] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <warn>  [1769444057.3625] device (vlan20)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3635] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3643] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (47521aef-ff8a-4866-88bb-b12de57abaf4)
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3646] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <warn>  [1769444057.3648] device (vlan21)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3657] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3666] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (2d1dd59f-6b46-41d3-9f71-060256efd14c)
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3670] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <warn>  [1769444057.3673] device (vlan22)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3805] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3810] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (15ea1c67-39dc-45f5-9c09-58c183c2dc66)
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3811] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3814] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3816] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3824] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <warn>  [1769444057.3825] device (br-ex)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3829] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3834] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (526b20da-f37e-42bc-9fc6-af732ddef5a4)
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3835] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3839] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3841] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3843] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3844] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3857] device (eth1): disconnecting for new activation request.
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3858] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3862] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3864] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3865] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3868] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <warn>  [1769444057.3870] device (vlan20)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3874] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3879] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (d417260f-fcb7-4313-a97e-9b28a300ad70)
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3880] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3884] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3886] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3887] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3891] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <warn>  [1769444057.3892] device (vlan21)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3897] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3902] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (2090cdd3-c8b3-41ed-b57f-7f042ba63df1)
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3903] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3907] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3910] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3912] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3915] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <warn>  [1769444057.3916] device (vlan22)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3921] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3926] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (bf719339-b8ed-4583-81cb-2d9aff546c71)
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3927] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3931] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3933] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3935] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3936] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3953] audit: op="device-reapply" interface="eth0" ifindex=2 args="ipv4.dhcp-timeout,ipv4.dhcp-client-id,ipv6.addr-gen-mode,ipv6.method,802-3-ethernet.mtu,connection.autoconnect-priority" pid=59025 uid=0 result="success"
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3955] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3959] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3961] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3969] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3974] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3979] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3983] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3985] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3992] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.3996] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.4000] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.4002] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 26 16:14:17 compute-0 kernel: ovs-system: entered promiscuous mode
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.4009] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.4013] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.4016] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.4017] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.4024] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.4030] dhcp4 (eth0): canceled DHCP transaction
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.4030] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 26 16:14:17 compute-0 systemd-udevd[59029]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:14:17 compute-0 kernel: Timeout policy base is empty
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.4030] dhcp4 (eth0): state changed no lease
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.4032] dhcp4 (eth0): activation: beginning transaction (no timeout)
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.4043] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.4046] audit: op="device-reapply" interface="eth1" ifindex=3 pid=59025 uid=0 result="fail" reason="Device is not activated"
Jan 26 16:14:17 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.4086] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.4092] dhcp4 (eth0): state changed new lease, address=38.102.83.142
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.4138] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.4150] device (eth1): disconnecting for new activation request.
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.4151] audit: op="connection-activate" uuid="029c721f-b037-502e-8185-a257ece4e436" name="ci-private-network" pid=59025 uid=0 result="success"
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.4152] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.4182] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59025 uid=0 result="success"
Jan 26 16:14:17 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.4266] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.4377] device (eth1): Activation: starting connection 'ci-private-network' (029c721f-b037-502e-8185-a257ece4e436)
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.4389] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.4394] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.4401] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.4403] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.4405] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.4407] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.4409] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.4411] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.4425] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.4434] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.4439] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.4444] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.4449] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.4453] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.4459] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.4464] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.4468] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.4473] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.4478] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.4483] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.4487] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.4494] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.4501] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 26 16:14:17 compute-0 kernel: br-ex: entered promiscuous mode
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.4545] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.4547] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.4553] device (eth1): Activation: successful, device activated.
Jan 26 16:14:17 compute-0 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.4695] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.4709] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 26 16:14:17 compute-0 kernel: vlan22: entered promiscuous mode
Jan 26 16:14:17 compute-0 systemd-udevd[59030]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.4745] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.4746] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.4749] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 26 16:14:17 compute-0 kernel: vlan21: entered promiscuous mode
Jan 26 16:14:17 compute-0 kernel: vlan20: entered promiscuous mode
Jan 26 16:14:17 compute-0 systemd-udevd[59031]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.4888] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.4903] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.4925] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.4941] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.4957] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.4960] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.4968] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.4981] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.4983] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.4991] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.5041] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.5057] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.5075] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.5077] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 26 16:14:17 compute-0 NetworkManager[56253]: <info>  [1769444057.5083] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 26 16:14:18 compute-0 NetworkManager[56253]: <info>  [1769444058.6141] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59025 uid=0 result="success"
Jan 26 16:14:18 compute-0 NetworkManager[56253]: <info>  [1769444058.7423] checkpoint[0x55ae5c26b950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Jan 26 16:14:18 compute-0 NetworkManager[56253]: <info>  [1769444058.7426] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59025 uid=0 result="success"
Jan 26 16:14:19 compute-0 NetworkManager[56253]: <info>  [1769444059.0440] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=59025 uid=0 result="success"
Jan 26 16:14:19 compute-0 NetworkManager[56253]: <info>  [1769444059.0462] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=59025 uid=0 result="success"
Jan 26 16:14:19 compute-0 sudo[59358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulircgjgvxjoxnugkullyxfqcghvmpyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444058.6030362-290-67711968132773/AnsiballZ_async_status.py'
Jan 26 16:14:19 compute-0 sudo[59358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:14:19 compute-0 NetworkManager[56253]: <info>  [1769444059.2333] audit: op="networking-control" arg="global-dns-configuration" pid=59025 uid=0 result="success"
Jan 26 16:14:19 compute-0 NetworkManager[56253]: <info>  [1769444059.2359] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Jan 26 16:14:19 compute-0 NetworkManager[56253]: <info>  [1769444059.2392] audit: op="networking-control" arg="global-dns-configuration" pid=59025 uid=0 result="success"
Jan 26 16:14:19 compute-0 NetworkManager[56253]: <info>  [1769444059.2415] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=59025 uid=0 result="success"
Jan 26 16:14:19 compute-0 python3.9[59360]: ansible-ansible.legacy.async_status Invoked with jid=j674375679921.59019 mode=status _async_dir=/root/.ansible_async
Jan 26 16:14:19 compute-0 sudo[59358]: pam_unix(sudo:session): session closed for user root
Jan 26 16:14:19 compute-0 NetworkManager[56253]: <info>  [1769444059.3657] checkpoint[0x55ae5c26ba20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Jan 26 16:14:19 compute-0 NetworkManager[56253]: <info>  [1769444059.3661] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=59025 uid=0 result="success"
Jan 26 16:14:19 compute-0 ansible-async_wrapper.py[59023]: Module complete (59023)
Jan 26 16:14:20 compute-0 ansible-async_wrapper.py[59022]: Done in kid B.
Jan 26 16:14:22 compute-0 sudo[59462]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hppnutkkizvcdjlqenkfjtbagqmqdsbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444058.6030362-290-67711968132773/AnsiballZ_async_status.py'
Jan 26 16:14:22 compute-0 sudo[59462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:14:22 compute-0 python3.9[59464]: ansible-ansible.legacy.async_status Invoked with jid=j674375679921.59019 mode=status _async_dir=/root/.ansible_async
Jan 26 16:14:22 compute-0 sudo[59462]: pam_unix(sudo:session): session closed for user root
Jan 26 16:14:22 compute-0 sudo[59562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fadwzndughpocafhdigkmwnlwhtusoks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444058.6030362-290-67711968132773/AnsiballZ_async_status.py'
Jan 26 16:14:22 compute-0 sudo[59562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:14:23 compute-0 python3.9[59564]: ansible-ansible.legacy.async_status Invoked with jid=j674375679921.59019 mode=cleanup _async_dir=/root/.ansible_async
Jan 26 16:14:23 compute-0 sudo[59562]: pam_unix(sudo:session): session closed for user root
Jan 26 16:14:23 compute-0 sudo[59714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fufyzmfqhlrpysvubvbsmenmwhaxrtdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444063.4020624-317-82527378006839/AnsiballZ_stat.py'
Jan 26 16:14:23 compute-0 sudo[59714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:14:23 compute-0 python3.9[59716]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:14:23 compute-0 sudo[59714]: pam_unix(sudo:session): session closed for user root
Jan 26 16:14:24 compute-0 sudo[59837]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eadesnqthhgfaxhivnbzmvilemazwzzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444063.4020624-317-82527378006839/AnsiballZ_copy.py'
Jan 26 16:14:24 compute-0 sudo[59837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:14:24 compute-0 python3.9[59839]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769444063.4020624-317-82527378006839/.source.returncode _original_basename=.c8sc84fw follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:14:24 compute-0 sudo[59837]: pam_unix(sudo:session): session closed for user root
Jan 26 16:14:25 compute-0 sudo[59989]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkuhxpguwnrimiuxuiayvtbfxwbpvgeh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444064.7677088-333-105323762530116/AnsiballZ_stat.py'
Jan 26 16:14:25 compute-0 sudo[59989]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:14:25 compute-0 python3.9[59991]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:14:25 compute-0 sudo[59989]: pam_unix(sudo:session): session closed for user root
Jan 26 16:14:25 compute-0 sudo[60112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zcrhzrumzptoafugdtxpeylgcljykxek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444064.7677088-333-105323762530116/AnsiballZ_copy.py'
Jan 26 16:14:25 compute-0 sudo[60112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:14:25 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 26 16:14:26 compute-0 python3.9[60114]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769444064.7677088-333-105323762530116/.source.cfg _original_basename=.r_upqg19 follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:14:26 compute-0 sudo[60112]: pam_unix(sudo:session): session closed for user root
Jan 26 16:14:26 compute-0 sudo[60268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwfftwwppxonckopfixqghvygamelduf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444066.2223587-348-180302198185317/AnsiballZ_systemd.py'
Jan 26 16:14:26 compute-0 sudo[60268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:14:26 compute-0 python3.9[60270]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 26 16:14:26 compute-0 systemd[1]: Reloading Network Manager...
Jan 26 16:14:26 compute-0 NetworkManager[56253]: <info>  [1769444066.9123] audit: op="reload" arg="0" pid=60274 uid=0 result="success"
Jan 26 16:14:26 compute-0 NetworkManager[56253]: <info>  [1769444066.9132] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Jan 26 16:14:26 compute-0 systemd[1]: Reloaded Network Manager.
Jan 26 16:14:26 compute-0 sudo[60268]: pam_unix(sudo:session): session closed for user root
Jan 26 16:14:27 compute-0 sshd-session[52261]: Connection closed by 192.168.122.30 port 36054
Jan 26 16:14:27 compute-0 sshd-session[52258]: pam_unix(sshd:session): session closed for user zuul
Jan 26 16:14:27 compute-0 systemd[1]: session-12.scope: Deactivated successfully.
Jan 26 16:14:27 compute-0 systemd[1]: session-12.scope: Consumed 51.052s CPU time.
Jan 26 16:14:27 compute-0 systemd-logind[788]: Session 12 logged out. Waiting for processes to exit.
Jan 26 16:14:27 compute-0 systemd-logind[788]: Removed session 12.
Jan 26 16:14:33 compute-0 sshd-session[60305]: Accepted publickey for zuul from 192.168.122.30 port 46010 ssh2: ECDSA SHA256:e2nTWydmUlQnW5BYhfFSR+TRarHnUqn0luI8Mjiyqhk
Jan 26 16:14:33 compute-0 systemd-logind[788]: New session 13 of user zuul.
Jan 26 16:14:33 compute-0 systemd[1]: Started Session 13 of User zuul.
Jan 26 16:14:33 compute-0 sshd-session[60305]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 26 16:14:34 compute-0 python3.9[60458]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 16:14:36 compute-0 python3.9[60613]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 26 16:14:36 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 26 16:14:37 compute-0 python3.9[60803]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 16:14:38 compute-0 sshd-session[60308]: Connection closed by 192.168.122.30 port 46010
Jan 26 16:14:38 compute-0 sshd-session[60305]: pam_unix(sshd:session): session closed for user zuul
Jan 26 16:14:38 compute-0 systemd[1]: session-13.scope: Deactivated successfully.
Jan 26 16:14:38 compute-0 systemd[1]: session-13.scope: Consumed 2.284s CPU time.
Jan 26 16:14:38 compute-0 systemd-logind[788]: Session 13 logged out. Waiting for processes to exit.
Jan 26 16:14:38 compute-0 systemd-logind[788]: Removed session 13.
Jan 26 16:14:44 compute-0 sshd-session[60831]: Accepted publickey for zuul from 192.168.122.30 port 50472 ssh2: ECDSA SHA256:e2nTWydmUlQnW5BYhfFSR+TRarHnUqn0luI8Mjiyqhk
Jan 26 16:14:44 compute-0 systemd-logind[788]: New session 14 of user zuul.
Jan 26 16:14:44 compute-0 systemd[1]: Started Session 14 of User zuul.
Jan 26 16:14:44 compute-0 sshd-session[60831]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 26 16:14:45 compute-0 python3.9[60985]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 16:14:46 compute-0 python3.9[61139]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 16:14:47 compute-0 sudo[61293]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dilxpwpxcvoyhloduwintzufrwovwmoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444087.2867596-35-186981350661337/AnsiballZ_setup.py'
Jan 26 16:14:47 compute-0 sudo[61293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:14:47 compute-0 python3.9[61295]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 26 16:14:48 compute-0 sudo[61293]: pam_unix(sudo:session): session closed for user root
Jan 26 16:14:48 compute-0 sudo[61378]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vduvipiltynzmgevnodilnpqfpuksdsk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444087.2867596-35-186981350661337/AnsiballZ_dnf.py'
Jan 26 16:14:48 compute-0 sudo[61378]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:14:48 compute-0 python3.9[61380]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 26 16:14:50 compute-0 sudo[61378]: pam_unix(sudo:session): session closed for user root
Jan 26 16:14:50 compute-0 sudo[61531]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwwcqqwbqtaspfgmvfevlzovysjcdufr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444090.5181708-47-273873189955211/AnsiballZ_setup.py'
Jan 26 16:14:50 compute-0 sudo[61531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:14:51 compute-0 python3.9[61533]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 26 16:14:51 compute-0 sudo[61531]: pam_unix(sudo:session): session closed for user root
Jan 26 16:14:52 compute-0 sudo[61722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbljhvtletdggdiayjhrvlcrpcqpwukr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444091.8047543-58-8335057069000/AnsiballZ_file.py'
Jan 26 16:14:52 compute-0 sudo[61722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:14:52 compute-0 python3.9[61724]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:14:52 compute-0 sudo[61722]: pam_unix(sudo:session): session closed for user root
Jan 26 16:14:53 compute-0 sudo[61874]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhuaepnsiekymaomdghhezfabrmfsibd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444092.6146724-66-276703173932804/AnsiballZ_command.py'
Jan 26 16:14:53 compute-0 sudo[61874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:14:53 compute-0 python3.9[61876]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 16:14:53 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 26 16:14:53 compute-0 sudo[61874]: pam_unix(sudo:session): session closed for user root
Jan 26 16:14:54 compute-0 sudo[62038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scysajgdtmhmvubemdhbttzqwflzbeiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444093.7696245-74-280584893052038/AnsiballZ_stat.py'
Jan 26 16:14:54 compute-0 sudo[62038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:14:54 compute-0 python3.9[62040]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:14:54 compute-0 sudo[62038]: pam_unix(sudo:session): session closed for user root
Jan 26 16:14:54 compute-0 sudo[62116]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-indfvkfxrnwoshvazmmfsstrwqawvlin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444093.7696245-74-280584893052038/AnsiballZ_file.py'
Jan 26 16:14:54 compute-0 sudo[62116]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:14:54 compute-0 python3.9[62118]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:14:54 compute-0 sudo[62116]: pam_unix(sudo:session): session closed for user root
Jan 26 16:14:55 compute-0 sudo[62268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svtrvqfkjvyrldasaqegqtxiaqqxrgqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444095.1244967-86-89324061036513/AnsiballZ_stat.py'
Jan 26 16:14:55 compute-0 sudo[62268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:14:55 compute-0 python3.9[62270]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:14:55 compute-0 sudo[62268]: pam_unix(sudo:session): session closed for user root
Jan 26 16:14:56 compute-0 sudo[62346]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqqnpqyrvgieipovuekuaydnctkxdwvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444095.1244967-86-89324061036513/AnsiballZ_file.py'
Jan 26 16:14:56 compute-0 sudo[62346]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:14:56 compute-0 python3.9[62348]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:14:56 compute-0 sudo[62346]: pam_unix(sudo:session): session closed for user root
Jan 26 16:14:56 compute-0 sudo[62498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqugyugbmivsghynpxdbqwaillmpjlwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444096.4957142-99-8550268081950/AnsiballZ_ini_file.py'
Jan 26 16:14:56 compute-0 sudo[62498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:14:57 compute-0 python3.9[62500]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:14:57 compute-0 sudo[62498]: pam_unix(sudo:session): session closed for user root
Jan 26 16:14:57 compute-0 sudo[62650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-seiozayfxoxdqexoivzoxamsvuqqlebp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444097.2813811-99-185908761064423/AnsiballZ_ini_file.py'
Jan 26 16:14:57 compute-0 sudo[62650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:14:57 compute-0 python3.9[62652]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:14:57 compute-0 sudo[62650]: pam_unix(sudo:session): session closed for user root
Jan 26 16:14:58 compute-0 sudo[62802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eplbytzjsxjeqigscomvipgaiajfdhhy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444097.849379-99-121672164322866/AnsiballZ_ini_file.py'
Jan 26 16:14:58 compute-0 sudo[62802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:14:58 compute-0 python3.9[62804]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:14:58 compute-0 sudo[62802]: pam_unix(sudo:session): session closed for user root
Jan 26 16:14:58 compute-0 sudo[62954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aacunosiumrtythotymacravfywvnwcz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444098.4542975-99-207104000061645/AnsiballZ_ini_file.py'
Jan 26 16:14:58 compute-0 sudo[62954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:14:58 compute-0 python3.9[62956]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:14:58 compute-0 sudo[62954]: pam_unix(sudo:session): session closed for user root
Jan 26 16:14:59 compute-0 sudo[63106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgklcfpbzjpxhkuwikqnjbeoinehzoby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444099.227005-130-30785163390099/AnsiballZ_dnf.py'
Jan 26 16:14:59 compute-0 sudo[63106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:14:59 compute-0 python3.9[63108]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 26 16:15:01 compute-0 sudo[63106]: pam_unix(sudo:session): session closed for user root
Jan 26 16:15:02 compute-0 sudo[63259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrybkrinectznnwwpinjqgrjhkiitnul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444101.4660537-141-278059313848789/AnsiballZ_setup.py'
Jan 26 16:15:02 compute-0 sudo[63259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:15:02 compute-0 python3.9[63261]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 16:15:02 compute-0 sudo[63259]: pam_unix(sudo:session): session closed for user root
Jan 26 16:15:03 compute-0 sudo[63413]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gozffukozntlhztqaavgzwfgtymyntvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444102.8532147-149-208154132349566/AnsiballZ_stat.py'
Jan 26 16:15:03 compute-0 sudo[63413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:15:03 compute-0 python3.9[63415]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 16:15:03 compute-0 sudo[63413]: pam_unix(sudo:session): session closed for user root
Jan 26 16:15:03 compute-0 sudo[63565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uukafhhtqsfysjzauqpikdfzgegcqqsk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444103.613387-158-183007224059374/AnsiballZ_stat.py'
Jan 26 16:15:03 compute-0 sudo[63565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:15:04 compute-0 python3.9[63567]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 16:15:04 compute-0 sudo[63565]: pam_unix(sudo:session): session closed for user root
Jan 26 16:15:04 compute-0 sudo[63717]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gachvcvuydameaurdubsfszghtvhcina ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444104.6082637-168-82965302726008/AnsiballZ_command.py'
Jan 26 16:15:04 compute-0 sudo[63717]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:15:05 compute-0 python3.9[63719]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 16:15:05 compute-0 sudo[63717]: pam_unix(sudo:session): session closed for user root
Jan 26 16:15:05 compute-0 sudo[63870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-buzkewwuhcjcwwfvidsrfhqafbuactpr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444105.3696773-178-2401677499757/AnsiballZ_service_facts.py'
Jan 26 16:15:05 compute-0 sudo[63870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:15:06 compute-0 python3.9[63872]: ansible-service_facts Invoked
Jan 26 16:15:06 compute-0 network[63889]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 26 16:15:06 compute-0 network[63890]: 'network-scripts' will be removed from distribution in near future.
Jan 26 16:15:06 compute-0 network[63891]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 26 16:15:09 compute-0 sudo[63870]: pam_unix(sudo:session): session closed for user root
Jan 26 16:15:10 compute-0 sudo[64174]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rllmqqxouhsnkgqwwitsrabydffohwna ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1769444109.9510086-193-76886779636898/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1769444109.9510086-193-76886779636898/args'
Jan 26 16:15:10 compute-0 sudo[64174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:15:10 compute-0 sudo[64174]: pam_unix(sudo:session): session closed for user root
Jan 26 16:15:10 compute-0 sudo[64341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kspznedpuutvvvukcihmsrjnnitiqptw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444110.6646736-204-74322601234770/AnsiballZ_dnf.py'
Jan 26 16:15:10 compute-0 sudo[64341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:15:11 compute-0 python3.9[64343]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 26 16:15:12 compute-0 sudo[64341]: pam_unix(sudo:session): session closed for user root
Jan 26 16:15:13 compute-0 sudo[64494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udaewrwcepehicuklcjljiuupmpiwwjj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444112.9903462-217-252108419116928/AnsiballZ_package_facts.py'
Jan 26 16:15:13 compute-0 sudo[64494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:15:13 compute-0 python3.9[64496]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Jan 26 16:15:14 compute-0 sudo[64494]: pam_unix(sudo:session): session closed for user root
Jan 26 16:15:14 compute-0 sudo[64646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwfrciedjnxrbogdgowpdgvyrcqiwepw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444114.5278451-227-262104853874068/AnsiballZ_stat.py'
Jan 26 16:15:14 compute-0 sudo[64646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:15:15 compute-0 python3.9[64648]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:15:15 compute-0 sudo[64646]: pam_unix(sudo:session): session closed for user root
Jan 26 16:15:15 compute-0 sudo[64771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-veuyfjqfgynzkizuxumilixaqiiyttqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444114.5278451-227-262104853874068/AnsiballZ_copy.py'
Jan 26 16:15:15 compute-0 sudo[64771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:15:15 compute-0 python3.9[64773]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769444114.5278451-227-262104853874068/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:15:16 compute-0 sudo[64771]: pam_unix(sudo:session): session closed for user root
Jan 26 16:15:16 compute-0 sudo[64925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnwuarwtzzrqoquqgucppphmwfynhjpb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444116.2543473-242-156540411925515/AnsiballZ_stat.py'
Jan 26 16:15:16 compute-0 sudo[64925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:15:16 compute-0 python3.9[64927]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:15:16 compute-0 sudo[64925]: pam_unix(sudo:session): session closed for user root
Jan 26 16:15:17 compute-0 sudo[65050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhsxrhyvboplzwtafwosbivjmdmmjnqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444116.2543473-242-156540411925515/AnsiballZ_copy.py'
Jan 26 16:15:17 compute-0 sudo[65050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:15:17 compute-0 python3.9[65052]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769444116.2543473-242-156540411925515/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:15:17 compute-0 sudo[65050]: pam_unix(sudo:session): session closed for user root
Jan 26 16:15:18 compute-0 sudo[65204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhccsrjybdrxxyxizgskxcsjmsqyjogl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444117.8169205-263-44830502192153/AnsiballZ_lineinfile.py'
Jan 26 16:15:18 compute-0 sudo[65204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:15:18 compute-0 python3.9[65206]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:15:18 compute-0 sudo[65204]: pam_unix(sudo:session): session closed for user root
Jan 26 16:15:19 compute-0 sudo[65358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gssyxubmbpckdirxvnhzijdfmmkkuzfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444118.9922645-278-144741899621623/AnsiballZ_setup.py'
Jan 26 16:15:19 compute-0 sudo[65358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:15:19 compute-0 python3.9[65360]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 26 16:15:19 compute-0 sudo[65358]: pam_unix(sudo:session): session closed for user root
Jan 26 16:15:21 compute-0 sudo[65442]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfqnqlfugodtidwoqnmvnwgyvyrsgyzv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444118.9922645-278-144741899621623/AnsiballZ_systemd.py'
Jan 26 16:15:21 compute-0 sudo[65442]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:15:21 compute-0 python3.9[65444]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 16:15:21 compute-0 sudo[65442]: pam_unix(sudo:session): session closed for user root
Jan 26 16:15:22 compute-0 sudo[65596]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzvqeeefakkimnhllkdbajniahbviuni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444122.3335428-294-224741277389314/AnsiballZ_setup.py'
Jan 26 16:15:22 compute-0 sudo[65596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:15:22 compute-0 python3.9[65598]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 26 16:15:23 compute-0 sudo[65596]: pam_unix(sudo:session): session closed for user root
Jan 26 16:15:23 compute-0 sudo[65680]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnbmmnfenhpsbfzxknbjwttfpmjvthxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444122.3335428-294-224741277389314/AnsiballZ_systemd.py'
Jan 26 16:15:23 compute-0 sudo[65680]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:15:23 compute-0 python3.9[65682]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 26 16:15:23 compute-0 chronyd[801]: chronyd exiting
Jan 26 16:15:23 compute-0 systemd[1]: Stopping NTP client/server...
Jan 26 16:15:23 compute-0 systemd[1]: chronyd.service: Deactivated successfully.
Jan 26 16:15:23 compute-0 systemd[1]: Stopped NTP client/server.
Jan 26 16:15:23 compute-0 systemd[1]: Starting NTP client/server...
Jan 26 16:15:24 compute-0 chronyd[65691]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 26 16:15:24 compute-0 chronyd[65691]: Frequency -27.122 +/- 0.124 ppm read from /var/lib/chrony/drift
Jan 26 16:15:24 compute-0 chronyd[65691]: Loaded seccomp filter (level 2)
Jan 26 16:15:24 compute-0 systemd[1]: Started NTP client/server.
Jan 26 16:15:24 compute-0 sudo[65680]: pam_unix(sudo:session): session closed for user root
Jan 26 16:15:24 compute-0 sshd-session[60834]: Connection closed by 192.168.122.30 port 50472
Jan 26 16:15:24 compute-0 sshd-session[60831]: pam_unix(sshd:session): session closed for user zuul
Jan 26 16:15:24 compute-0 systemd-logind[788]: Session 14 logged out. Waiting for processes to exit.
Jan 26 16:15:24 compute-0 systemd[1]: session-14.scope: Deactivated successfully.
Jan 26 16:15:24 compute-0 systemd[1]: session-14.scope: Consumed 27.186s CPU time.
Jan 26 16:15:24 compute-0 systemd-logind[788]: Removed session 14.
Jan 26 16:15:29 compute-0 sshd-session[65717]: Accepted publickey for zuul from 192.168.122.30 port 59932 ssh2: ECDSA SHA256:e2nTWydmUlQnW5BYhfFSR+TRarHnUqn0luI8Mjiyqhk
Jan 26 16:15:29 compute-0 systemd-logind[788]: New session 15 of user zuul.
Jan 26 16:15:29 compute-0 systemd[1]: Started Session 15 of User zuul.
Jan 26 16:15:29 compute-0 sshd-session[65717]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 26 16:15:31 compute-0 python3.9[65870]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 16:15:32 compute-0 sudo[66024]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfvowigdiemdkgaplzgfplwhnplytfzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444131.5235546-28-192118501879508/AnsiballZ_file.py'
Jan 26 16:15:32 compute-0 sudo[66024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:15:32 compute-0 python3.9[66026]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:15:32 compute-0 sudo[66024]: pam_unix(sudo:session): session closed for user root
Jan 26 16:15:33 compute-0 sudo[66199]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eiuqyocshbyexqgpmkfaluazaschczcm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444132.6920414-36-121068126210588/AnsiballZ_stat.py'
Jan 26 16:15:33 compute-0 sudo[66199]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:15:33 compute-0 python3.9[66201]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:15:33 compute-0 sudo[66199]: pam_unix(sudo:session): session closed for user root
Jan 26 16:15:33 compute-0 sudo[66277]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tidoeywhkyzdncblqhqbcohvknkmfhgz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444132.6920414-36-121068126210588/AnsiballZ_file.py'
Jan 26 16:15:33 compute-0 sudo[66277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:15:34 compute-0 python3.9[66279]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.8ehtqir2 recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:15:34 compute-0 sudo[66277]: pam_unix(sudo:session): session closed for user root
Jan 26 16:15:34 compute-0 sudo[66429]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-roumhafrgeoufazcyolzyvvwotekgupl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444134.5382552-56-207311022531894/AnsiballZ_stat.py'
Jan 26 16:15:34 compute-0 sudo[66429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:15:35 compute-0 python3.9[66431]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:15:35 compute-0 sudo[66429]: pam_unix(sudo:session): session closed for user root
Jan 26 16:15:35 compute-0 sudo[66552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcxfrazzdqdoifggywwfflytjvaxiuyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444134.5382552-56-207311022531894/AnsiballZ_copy.py'
Jan 26 16:15:35 compute-0 sudo[66552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:15:35 compute-0 python3.9[66554]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769444134.5382552-56-207311022531894/.source _original_basename=.n0b5y6kb follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:15:35 compute-0 sudo[66552]: pam_unix(sudo:session): session closed for user root
Jan 26 16:15:36 compute-0 sudo[66704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zddsydgeljmofcnxknxokqepgeytboic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444136.0090172-72-33479070378379/AnsiballZ_file.py'
Jan 26 16:15:36 compute-0 sudo[66704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:15:36 compute-0 python3.9[66706]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:15:36 compute-0 sudo[66704]: pam_unix(sudo:session): session closed for user root
Jan 26 16:15:36 compute-0 sudo[66856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tacjdqxlbizlwfxtunkydwslzelugkqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444136.6705034-80-118799936067204/AnsiballZ_stat.py'
Jan 26 16:15:36 compute-0 sudo[66856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:15:37 compute-0 python3.9[66858]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:15:37 compute-0 sudo[66856]: pam_unix(sudo:session): session closed for user root
Jan 26 16:15:37 compute-0 sudo[66979]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvloldhoboiifwbdtnsfayqfkixywxpr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444136.6705034-80-118799936067204/AnsiballZ_copy.py'
Jan 26 16:15:37 compute-0 sudo[66979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:15:37 compute-0 python3.9[66981]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769444136.6705034-80-118799936067204/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:15:37 compute-0 sudo[66979]: pam_unix(sudo:session): session closed for user root
Jan 26 16:15:38 compute-0 sudo[67131]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvxfebmfqtpbviqbwjwuamzhqmektonb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444137.8833752-80-48219474653466/AnsiballZ_stat.py'
Jan 26 16:15:38 compute-0 sudo[67131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:15:38 compute-0 python3.9[67133]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:15:38 compute-0 sudo[67131]: pam_unix(sudo:session): session closed for user root
Jan 26 16:15:38 compute-0 sudo[67254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldipmbumuboscadnffzfwkcklchxpfdp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444137.8833752-80-48219474653466/AnsiballZ_copy.py'
Jan 26 16:15:38 compute-0 sudo[67254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:15:38 compute-0 python3.9[67256]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769444137.8833752-80-48219474653466/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:15:38 compute-0 sudo[67254]: pam_unix(sudo:session): session closed for user root
Jan 26 16:15:39 compute-0 sudo[67406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfwcjvafoisscmlpgqrekazqpvqkwihn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444139.0499237-109-253149569660988/AnsiballZ_file.py'
Jan 26 16:15:39 compute-0 sudo[67406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:15:39 compute-0 python3.9[67408]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:15:39 compute-0 sudo[67406]: pam_unix(sudo:session): session closed for user root
Jan 26 16:15:39 compute-0 sudo[67558]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hsfkzpravdtdsfzpqtivgdqdmdqzheha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444139.7104237-117-55107506779170/AnsiballZ_stat.py'
Jan 26 16:15:39 compute-0 sudo[67558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:15:40 compute-0 python3.9[67560]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:15:40 compute-0 sudo[67558]: pam_unix(sudo:session): session closed for user root
Jan 26 16:15:40 compute-0 sudo[67681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvozlfyztttjxhffujxbltwclrfojwuy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444139.7104237-117-55107506779170/AnsiballZ_copy.py'
Jan 26 16:15:40 compute-0 sudo[67681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:15:40 compute-0 python3.9[67683]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769444139.7104237-117-55107506779170/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:15:40 compute-0 sudo[67681]: pam_unix(sudo:session): session closed for user root
Jan 26 16:15:41 compute-0 sudo[67833]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opckczhfeeecwbhfqggbilbsdhhnctpm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444140.92199-132-108911139380361/AnsiballZ_stat.py'
Jan 26 16:15:41 compute-0 sudo[67833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:15:41 compute-0 python3.9[67835]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:15:41 compute-0 sudo[67833]: pam_unix(sudo:session): session closed for user root
Jan 26 16:15:41 compute-0 sudo[67956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmrtbwsvlleqqirjdvwpfrllsqprofkw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444140.92199-132-108911139380361/AnsiballZ_copy.py'
Jan 26 16:15:41 compute-0 sudo[67956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:15:42 compute-0 python3.9[67958]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769444140.92199-132-108911139380361/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:15:42 compute-0 sudo[67956]: pam_unix(sudo:session): session closed for user root
Jan 26 16:15:42 compute-0 sudo[68108]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdsdrlbibpxzwipdemmloofltvenjstv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444142.2350092-147-242848391053868/AnsiballZ_systemd.py'
Jan 26 16:15:42 compute-0 sudo[68108]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:15:43 compute-0 python3.9[68110]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 16:15:43 compute-0 systemd[1]: Reloading.
Jan 26 16:15:43 compute-0 systemd-rc-local-generator[68138]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 16:15:43 compute-0 systemd-sysv-generator[68142]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 16:15:43 compute-0 systemd[1]: Reloading.
Jan 26 16:15:43 compute-0 systemd-rc-local-generator[68172]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 16:15:43 compute-0 systemd-sysv-generator[68176]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 16:15:43 compute-0 systemd[1]: Starting EDPM Container Shutdown...
Jan 26 16:15:43 compute-0 systemd[1]: Finished EDPM Container Shutdown.
Jan 26 16:15:43 compute-0 sudo[68108]: pam_unix(sudo:session): session closed for user root
Jan 26 16:15:44 compute-0 sudo[68335]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uckctdhvuxgqcfpogvgqmtxidwmxqttw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444143.8816571-155-184426486092445/AnsiballZ_stat.py'
Jan 26 16:15:44 compute-0 sudo[68335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:15:44 compute-0 python3.9[68337]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:15:44 compute-0 sudo[68335]: pam_unix(sudo:session): session closed for user root
Jan 26 16:15:44 compute-0 sudo[68458]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzmncxmgkdgyltqfhyuiaouwskduhpwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444143.8816571-155-184426486092445/AnsiballZ_copy.py'
Jan 26 16:15:44 compute-0 sudo[68458]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:15:44 compute-0 python3.9[68460]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769444143.8816571-155-184426486092445/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:15:44 compute-0 sudo[68458]: pam_unix(sudo:session): session closed for user root
Jan 26 16:15:45 compute-0 sudo[68610]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckcdfjhaqaaofgywwooffbqxaittvlzx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444145.1095595-170-55975952641768/AnsiballZ_stat.py'
Jan 26 16:15:45 compute-0 sudo[68610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:15:45 compute-0 python3.9[68612]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:15:45 compute-0 sudo[68610]: pam_unix(sudo:session): session closed for user root
Jan 26 16:15:45 compute-0 sudo[68733]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-natylvtdypiqncfavphtbpsfaduakbcs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444145.1095595-170-55975952641768/AnsiballZ_copy.py'
Jan 26 16:15:45 compute-0 sudo[68733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:15:46 compute-0 python3.9[68735]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769444145.1095595-170-55975952641768/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:15:46 compute-0 sudo[68733]: pam_unix(sudo:session): session closed for user root
Jan 26 16:15:46 compute-0 sudo[68885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkauzynmqcnseanxhjyiijrtxotzhfba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444146.341453-185-267201063563322/AnsiballZ_systemd.py'
Jan 26 16:15:46 compute-0 sudo[68885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:15:46 compute-0 python3.9[68887]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 16:15:46 compute-0 systemd[1]: Reloading.
Jan 26 16:15:47 compute-0 systemd-rc-local-generator[68915]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 16:15:47 compute-0 systemd-sysv-generator[68919]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 16:15:47 compute-0 systemd[1]: Reloading.
Jan 26 16:15:47 compute-0 systemd-rc-local-generator[68952]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 16:15:47 compute-0 systemd-sysv-generator[68958]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 16:15:47 compute-0 systemd[1]: Starting Create netns directory...
Jan 26 16:15:47 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 26 16:15:47 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 26 16:15:47 compute-0 systemd[1]: Finished Create netns directory.
Jan 26 16:15:47 compute-0 sudo[68885]: pam_unix(sudo:session): session closed for user root
Jan 26 16:15:48 compute-0 python3.9[69113]: ansible-ansible.builtin.service_facts Invoked
Jan 26 16:15:48 compute-0 network[69130]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 26 16:15:48 compute-0 network[69131]: 'network-scripts' will be removed from distribution in near future.
Jan 26 16:15:48 compute-0 network[69132]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 26 16:15:51 compute-0 sudo[69392]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nuhaqowoehcddyynyzxzptffeevkurqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444151.3782396-201-91842847448285/AnsiballZ_systemd.py'
Jan 26 16:15:51 compute-0 sudo[69392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:15:52 compute-0 python3.9[69394]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 16:15:52 compute-0 systemd[1]: Reloading.
Jan 26 16:15:52 compute-0 systemd-rc-local-generator[69424]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 16:15:52 compute-0 systemd-sysv-generator[69427]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 16:15:52 compute-0 systemd[1]: Stopping IPv4 firewall with iptables...
Jan 26 16:15:52 compute-0 iptables.init[69433]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Jan 26 16:15:52 compute-0 iptables.init[69433]: iptables: Flushing firewall rules: [  OK  ]
Jan 26 16:15:52 compute-0 systemd[1]: iptables.service: Deactivated successfully.
Jan 26 16:15:52 compute-0 systemd[1]: Stopped IPv4 firewall with iptables.
Jan 26 16:15:52 compute-0 sudo[69392]: pam_unix(sudo:session): session closed for user root
Jan 26 16:15:53 compute-0 sudo[69628]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvphjvlzcjujxmiazmmimnwnedydiajg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444152.870964-201-147785588422009/AnsiballZ_systemd.py'
Jan 26 16:15:53 compute-0 sudo[69628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:15:53 compute-0 python3.9[69630]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 16:15:53 compute-0 sudo[69628]: pam_unix(sudo:session): session closed for user root
Jan 26 16:15:54 compute-0 sudo[69782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cuguvagmiwoimzpdbieeuqnyzemyakug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444153.849657-217-27720728194380/AnsiballZ_systemd.py'
Jan 26 16:15:54 compute-0 sudo[69782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:15:54 compute-0 python3.9[69784]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 16:15:54 compute-0 systemd[1]: Reloading.
Jan 26 16:15:54 compute-0 systemd-rc-local-generator[69817]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 16:15:54 compute-0 systemd-sysv-generator[69821]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 16:15:54 compute-0 systemd[1]: Starting Netfilter Tables...
Jan 26 16:15:54 compute-0 systemd[1]: Finished Netfilter Tables.
Jan 26 16:15:54 compute-0 sudo[69782]: pam_unix(sudo:session): session closed for user root
Jan 26 16:15:55 compute-0 sudo[69976]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thbdctspsvbghqnmfqjccwufeyagjtrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444154.9826908-225-61962406055359/AnsiballZ_command.py'
Jan 26 16:15:55 compute-0 sudo[69976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:15:55 compute-0 python3.9[69978]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 16:15:55 compute-0 sudo[69976]: pam_unix(sudo:session): session closed for user root
Jan 26 16:15:56 compute-0 sudo[70129]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpqkvifyqqqvjffljkwuadgzixenypyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444156.0793114-239-116545506478282/AnsiballZ_stat.py'
Jan 26 16:15:56 compute-0 sudo[70129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:15:56 compute-0 python3.9[70131]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:15:56 compute-0 sudo[70129]: pam_unix(sudo:session): session closed for user root
Jan 26 16:15:57 compute-0 sudo[70254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcsgwiislbeulnyuyyttuonfqprijwof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444156.0793114-239-116545506478282/AnsiballZ_copy.py'
Jan 26 16:15:57 compute-0 sudo[70254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:15:57 compute-0 python3.9[70256]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769444156.0793114-239-116545506478282/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:15:57 compute-0 sudo[70254]: pam_unix(sudo:session): session closed for user root
Jan 26 16:15:57 compute-0 sudo[70407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvmtkllmfqzrigzhhjmsbejbipfhhtcn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444157.431517-254-120643297292643/AnsiballZ_systemd.py'
Jan 26 16:15:57 compute-0 sudo[70407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:15:58 compute-0 python3.9[70409]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 26 16:15:58 compute-0 systemd[1]: Reloading OpenSSH server daemon...
Jan 26 16:15:58 compute-0 sshd[1007]: Received SIGHUP; restarting.
Jan 26 16:15:58 compute-0 sshd[1007]: Server listening on 0.0.0.0 port 22.
Jan 26 16:15:58 compute-0 sshd[1007]: Server listening on :: port 22.
Jan 26 16:15:58 compute-0 systemd[1]: Reloaded OpenSSH server daemon.
Jan 26 16:15:58 compute-0 sudo[70407]: pam_unix(sudo:session): session closed for user root
Jan 26 16:15:58 compute-0 sudo[70563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oymtacjbhroyxqyqsipyeacjzjtfcnvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444158.3099122-262-116437350356022/AnsiballZ_file.py'
Jan 26 16:15:58 compute-0 sudo[70563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:15:58 compute-0 python3.9[70565]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:15:58 compute-0 sudo[70563]: pam_unix(sudo:session): session closed for user root
Jan 26 16:15:59 compute-0 sudo[70715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zytehjyygnluhtgzmdwdzaobwuerxjkw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444159.0030246-270-103678584191579/AnsiballZ_stat.py'
Jan 26 16:15:59 compute-0 sudo[70715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:15:59 compute-0 python3.9[70717]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:15:59 compute-0 sudo[70715]: pam_unix(sudo:session): session closed for user root
Jan 26 16:15:59 compute-0 sudo[70838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajoizmdwunbwzbseuegbkduwwfefuulw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444159.0030246-270-103678584191579/AnsiballZ_copy.py'
Jan 26 16:15:59 compute-0 sudo[70838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:16:00 compute-0 python3.9[70840]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769444159.0030246-270-103678584191579/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:16:00 compute-0 sudo[70838]: pam_unix(sudo:session): session closed for user root
Jan 26 16:16:00 compute-0 sudo[70990]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjoyrznkyumeyiexcoimudivtuuspkoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444160.3523042-288-30754970817662/AnsiballZ_timezone.py'
Jan 26 16:16:00 compute-0 sudo[70990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:16:00 compute-0 python3.9[70992]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 26 16:16:01 compute-0 systemd[1]: Starting Time & Date Service...
Jan 26 16:16:01 compute-0 systemd[1]: Started Time & Date Service.
Jan 26 16:16:01 compute-0 sudo[70990]: pam_unix(sudo:session): session closed for user root
Jan 26 16:16:01 compute-0 sudo[71146]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srkytzllddubksvxjhbtpdbwthqgismz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444161.3529363-297-252892433628326/AnsiballZ_file.py'
Jan 26 16:16:01 compute-0 sudo[71146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:16:01 compute-0 python3.9[71148]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:16:01 compute-0 sudo[71146]: pam_unix(sudo:session): session closed for user root
Jan 26 16:16:02 compute-0 sudo[71298]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhbldskbgmajmvehenmoofpdcaqcmsof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444162.1108952-305-181352936489923/AnsiballZ_stat.py'
Jan 26 16:16:02 compute-0 sudo[71298]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:16:02 compute-0 python3.9[71300]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:16:02 compute-0 sudo[71298]: pam_unix(sudo:session): session closed for user root
Jan 26 16:16:03 compute-0 sudo[71421]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhirltnpmbmirsezhamvxiyzatkxrwha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444162.1108952-305-181352936489923/AnsiballZ_copy.py'
Jan 26 16:16:03 compute-0 sudo[71421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:16:03 compute-0 python3.9[71423]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769444162.1108952-305-181352936489923/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:16:03 compute-0 sudo[71421]: pam_unix(sudo:session): session closed for user root
Jan 26 16:16:03 compute-0 sudo[71573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whneonwlfsdxpxxgyelolegctmoxkqwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444163.406519-320-107359891638669/AnsiballZ_stat.py'
Jan 26 16:16:03 compute-0 sudo[71573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:16:03 compute-0 python3.9[71575]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:16:03 compute-0 sudo[71573]: pam_unix(sudo:session): session closed for user root
Jan 26 16:16:04 compute-0 sudo[71696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsnbowcvrnsnhvsmozwqpgdxakfesfiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444163.406519-320-107359891638669/AnsiballZ_copy.py'
Jan 26 16:16:04 compute-0 sudo[71696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:16:04 compute-0 python3.9[71698]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769444163.406519-320-107359891638669/.source.yaml _original_basename=.59nbifs4 follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:16:04 compute-0 sudo[71696]: pam_unix(sudo:session): session closed for user root
Jan 26 16:16:05 compute-0 sudo[71848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdzoqdvmzmumnvoplpsqeiwbqvoshfko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444164.9288983-335-233132791049558/AnsiballZ_stat.py'
Jan 26 16:16:05 compute-0 sudo[71848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:16:05 compute-0 python3.9[71850]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:16:05 compute-0 sudo[71848]: pam_unix(sudo:session): session closed for user root
Jan 26 16:16:05 compute-0 sudo[71971]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfvgizyufhpitkayhxuxfngtaechtcxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444164.9288983-335-233132791049558/AnsiballZ_copy.py'
Jan 26 16:16:05 compute-0 sudo[71971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:16:06 compute-0 python3.9[71973]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769444164.9288983-335-233132791049558/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:16:06 compute-0 sudo[71971]: pam_unix(sudo:session): session closed for user root
Jan 26 16:16:06 compute-0 sudo[72123]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oiqtqjukpjkbkchihcnjuhnomlgbburq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444166.2186377-350-83607307157154/AnsiballZ_command.py'
Jan 26 16:16:06 compute-0 sudo[72123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:16:06 compute-0 python3.9[72125]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 16:16:06 compute-0 sudo[72123]: pam_unix(sudo:session): session closed for user root
Jan 26 16:16:07 compute-0 sudo[72276]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-assbtmflomsiqrmdvgxghnxfwzjnuufl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444166.9058497-358-168585980870791/AnsiballZ_command.py'
Jan 26 16:16:07 compute-0 sudo[72276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:16:07 compute-0 python3.9[72278]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 16:16:07 compute-0 sudo[72276]: pam_unix(sudo:session): session closed for user root
Jan 26 16:16:07 compute-0 sudo[72429]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xyxvrakonnfttvoxbzhtaxykwbjjcefb ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769444167.5498555-366-116541530824483/AnsiballZ_edpm_nftables_from_files.py'
Jan 26 16:16:07 compute-0 sudo[72429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:16:08 compute-0 python3[72431]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 26 16:16:08 compute-0 sudo[72429]: pam_unix(sudo:session): session closed for user root
Jan 26 16:16:08 compute-0 sudo[72581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtadfycafuykuauagvvnzyeysoooghvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444168.3888931-374-1539117284562/AnsiballZ_stat.py'
Jan 26 16:16:08 compute-0 sudo[72581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:16:08 compute-0 python3.9[72583]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:16:08 compute-0 sudo[72581]: pam_unix(sudo:session): session closed for user root
Jan 26 16:16:09 compute-0 sudo[72704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uoslfifxsiogkicpkruzkalqtdjcompw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444168.3888931-374-1539117284562/AnsiballZ_copy.py'
Jan 26 16:16:09 compute-0 sudo[72704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:16:09 compute-0 python3.9[72706]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769444168.3888931-374-1539117284562/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:16:09 compute-0 sudo[72704]: pam_unix(sudo:session): session closed for user root
Jan 26 16:16:10 compute-0 sudo[72856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ripuprxvtvfuczrsaqtrehcznmipmqhj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444169.7516603-389-108625263934999/AnsiballZ_stat.py'
Jan 26 16:16:10 compute-0 sudo[72856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:16:10 compute-0 python3.9[72858]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:16:10 compute-0 sudo[72856]: pam_unix(sudo:session): session closed for user root
Jan 26 16:16:10 compute-0 sudo[72979]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bniuzqoxcvrysstwaqtqdbrsedrxrvez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444169.7516603-389-108625263934999/AnsiballZ_copy.py'
Jan 26 16:16:10 compute-0 sudo[72979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:16:10 compute-0 python3.9[72981]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769444169.7516603-389-108625263934999/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:16:10 compute-0 sudo[72979]: pam_unix(sudo:session): session closed for user root
Jan 26 16:16:11 compute-0 sudo[73131]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbejbkyawfzbpvxmqqjuizywrnhkbthg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444171.1110923-404-27971344144182/AnsiballZ_stat.py'
Jan 26 16:16:11 compute-0 sudo[73131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:16:11 compute-0 python3.9[73133]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:16:11 compute-0 sudo[73131]: pam_unix(sudo:session): session closed for user root
Jan 26 16:16:12 compute-0 sudo[73254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sonmaflvqlxzsqtojwxlqnlrwuvwecnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444171.1110923-404-27971344144182/AnsiballZ_copy.py'
Jan 26 16:16:12 compute-0 sudo[73254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:16:12 compute-0 python3.9[73256]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769444171.1110923-404-27971344144182/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:16:12 compute-0 sudo[73254]: pam_unix(sudo:session): session closed for user root
Jan 26 16:16:12 compute-0 sudo[73406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fglkxjodxibwaiwebzozzanhoitznqbf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444172.4799306-419-259046784749313/AnsiballZ_stat.py'
Jan 26 16:16:12 compute-0 sudo[73406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:16:12 compute-0 python3.9[73408]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:16:13 compute-0 sudo[73406]: pam_unix(sudo:session): session closed for user root
Jan 26 16:16:13 compute-0 sudo[73529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nefjgeycxmmkfcwjyspjxdkxjewzoxxr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444172.4799306-419-259046784749313/AnsiballZ_copy.py'
Jan 26 16:16:13 compute-0 sudo[73529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:16:13 compute-0 python3.9[73531]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769444172.4799306-419-259046784749313/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:16:13 compute-0 sudo[73529]: pam_unix(sudo:session): session closed for user root
Jan 26 16:16:14 compute-0 sudo[73681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oizwxzmpshtpsrdcrxfkhlzdkofcfnkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444173.788646-434-130091563030609/AnsiballZ_stat.py'
Jan 26 16:16:14 compute-0 sudo[73681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:16:14 compute-0 python3.9[73683]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:16:14 compute-0 sudo[73681]: pam_unix(sudo:session): session closed for user root
Jan 26 16:16:14 compute-0 sudo[73804]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhgcqbwbthujzcnrwlcshwyhzxhcbvtz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444173.788646-434-130091563030609/AnsiballZ_copy.py'
Jan 26 16:16:14 compute-0 sudo[73804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:16:14 compute-0 python3.9[73806]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769444173.788646-434-130091563030609/.source.nft follow=False _original_basename=ruleset.j2 checksum=15a82a0dc61abfd6aa593407582b5b950437eb80 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:16:15 compute-0 sudo[73804]: pam_unix(sudo:session): session closed for user root
Jan 26 16:16:15 compute-0 sudo[73956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dcsnzuyvmotvmxjcarowceyyyxtslbzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444175.1930594-449-208078207220130/AnsiballZ_file.py'
Jan 26 16:16:15 compute-0 sudo[73956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:16:15 compute-0 python3.9[73958]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:16:15 compute-0 sudo[73956]: pam_unix(sudo:session): session closed for user root
Jan 26 16:16:16 compute-0 sudo[74108]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oiqkvipolyopzlvbschjfspugznyitzj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444176.081463-457-118806025794217/AnsiballZ_command.py'
Jan 26 16:16:16 compute-0 sudo[74108]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:16:16 compute-0 python3.9[74110]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 16:16:16 compute-0 sudo[74108]: pam_unix(sudo:session): session closed for user root
Jan 26 16:16:17 compute-0 sudo[74267]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-perhtcifyvffqdbktemqqknyadxjypag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444176.9378073-465-247262743428788/AnsiballZ_blockinfile.py'
Jan 26 16:16:17 compute-0 sudo[74267]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:16:17 compute-0 python3.9[74269]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:16:17 compute-0 sudo[74267]: pam_unix(sudo:session): session closed for user root
Jan 26 16:16:18 compute-0 sudo[74420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbimyejsvnelglxuvcuipjifsehoimtl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444177.9803824-474-177392819556472/AnsiballZ_file.py'
Jan 26 16:16:18 compute-0 sudo[74420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:16:18 compute-0 python3.9[74422]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:16:18 compute-0 sudo[74420]: pam_unix(sudo:session): session closed for user root
Jan 26 16:16:18 compute-0 sudo[74572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvlaugpjyjhzjqgsmyrgmnqkslzzzmib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444178.647825-474-214109141574067/AnsiballZ_file.py'
Jan 26 16:16:18 compute-0 sudo[74572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:16:19 compute-0 python3.9[74574]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:16:19 compute-0 sudo[74572]: pam_unix(sudo:session): session closed for user root
Jan 26 16:16:20 compute-0 sudo[74724]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmmmdzeqimhmzccesgqjdanhdxeekzbl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444179.4407372-489-145449755288778/AnsiballZ_mount.py'
Jan 26 16:16:20 compute-0 sudo[74724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:16:20 compute-0 python3.9[74726]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 26 16:16:20 compute-0 sudo[74724]: pam_unix(sudo:session): session closed for user root
Jan 26 16:16:20 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 26 16:16:20 compute-0 sudo[74878]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cprqeflilbdnnzzivvkistdzkomvqdxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444180.4790306-489-223554843065571/AnsiballZ_mount.py'
Jan 26 16:16:20 compute-0 sudo[74878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:16:21 compute-0 python3.9[74880]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 26 16:16:21 compute-0 sudo[74878]: pam_unix(sudo:session): session closed for user root
Jan 26 16:16:21 compute-0 sshd-session[65720]: Connection closed by 192.168.122.30 port 59932
Jan 26 16:16:21 compute-0 sshd-session[65717]: pam_unix(sshd:session): session closed for user zuul
Jan 26 16:16:21 compute-0 systemd[1]: session-15.scope: Deactivated successfully.
Jan 26 16:16:21 compute-0 systemd[1]: session-15.scope: Consumed 38.095s CPU time.
Jan 26 16:16:21 compute-0 systemd-logind[788]: Session 15 logged out. Waiting for processes to exit.
Jan 26 16:16:21 compute-0 systemd-logind[788]: Removed session 15.
Jan 26 16:16:27 compute-0 sshd-session[74906]: Accepted publickey for zuul from 192.168.122.30 port 47034 ssh2: ECDSA SHA256:e2nTWydmUlQnW5BYhfFSR+TRarHnUqn0luI8Mjiyqhk
Jan 26 16:16:27 compute-0 systemd-logind[788]: New session 16 of user zuul.
Jan 26 16:16:27 compute-0 systemd[1]: Started Session 16 of User zuul.
Jan 26 16:16:27 compute-0 sshd-session[74906]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 26 16:16:28 compute-0 sudo[75059]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilkqorncupkwpptgrnyagogjaixkhnpf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444187.8808265-16-277427238442201/AnsiballZ_tempfile.py'
Jan 26 16:16:28 compute-0 sudo[75059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:16:28 compute-0 python3.9[75061]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Jan 26 16:16:28 compute-0 sudo[75059]: pam_unix(sudo:session): session closed for user root
Jan 26 16:16:29 compute-0 sudo[75211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icdeohxvlczeddrkoqilpxnspqdtebuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444188.7035618-28-134626686690832/AnsiballZ_stat.py'
Jan 26 16:16:29 compute-0 sudo[75211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:16:29 compute-0 python3.9[75213]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 16:16:29 compute-0 sudo[75211]: pam_unix(sudo:session): session closed for user root
Jan 26 16:16:30 compute-0 sudo[75363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mulvqbwwtrsmskhybiehrxifvuvwmzwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444189.5487926-38-110926514158609/AnsiballZ_setup.py'
Jan 26 16:16:30 compute-0 sudo[75363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:16:30 compute-0 python3.9[75365]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 16:16:30 compute-0 sudo[75363]: pam_unix(sudo:session): session closed for user root
Jan 26 16:16:31 compute-0 sudo[75515]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzngowkrqvkbwotzsqxaunxxxylagxva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444190.682958-47-63336894295231/AnsiballZ_blockinfile.py'
Jan 26 16:16:31 compute-0 sudo[75515]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:16:31 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 26 16:16:31 compute-0 python3.9[75517]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDPK1uOq8LO9V3qKg7Pi0fg/1yiDPysL7Uf05ie9csYUgSsp2qS4Fs0xL+q1cpFKQC9r/vCqKnndiXUQA0ezCHxg1UeF0iAa/zanDS87qy+Jq9WaWqsIiu4elUrMs8kuxO8uQ0AboUX4q+yjyVHOFzHeX9ff/6VlAsTgMm3aV9pO4XFt97M5x7Bfbou92I8NuSP0go2w9k6dC3ziySnThgxSOkaNtcGfrdhZuJQChAB845mkar7Cex225mA89VqqdR7zTOokiKwHtzCF6DTCtoekXbqjhTHViwTAJTd7yMhC6S8B2CzQsqsjViAB5LlgzW5bVvt4vEEE98Wq2e5365kD03fW0c+8IaBro7IYkaIVCt1UV5xzSHh8Gfl6genGcENdEnttKBFwTlyO3GmX4VUbgz6OfBM5IHq+dC7MPfQWafp1iHEia617SuoxEK0XHadvMtnoceUgObub3znBGifrzbLex9pzp1XNCnDzilMvdo67XLwYBuJJ/u6DZ0tF1M=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICmbshx5LRZoarEOtZDXvYXWM7ApkrBw46gMu8lqvOVq
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGul3DDHz7K74VXfcnGdtvNYXWQympeYzAa9twLLMp7dHlZxh4f6A3asdPEiPNbwS4kGjlYu+RRVSxMhIYnLxxQ=
                                             create=True mode=0644 path=/tmp/ansible.drmz9uzg state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:16:31 compute-0 sudo[75515]: pam_unix(sudo:session): session closed for user root
Jan 26 16:16:31 compute-0 sudo[75669]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swnjwyjkamznolcajaqaolpkdkjslgua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444191.5160837-55-175327713982459/AnsiballZ_command.py'
Jan 26 16:16:31 compute-0 sudo[75669]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:16:32 compute-0 python3.9[75671]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.drmz9uzg' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 16:16:32 compute-0 sudo[75669]: pam_unix(sudo:session): session closed for user root
Jan 26 16:16:32 compute-0 sudo[75823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbjgpegzjduhxlluqucbuwfrltcerqqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444192.333608-63-61917512574909/AnsiballZ_file.py'
Jan 26 16:16:32 compute-0 sudo[75823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:16:32 compute-0 python3.9[75825]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.drmz9uzg state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:16:32 compute-0 sudo[75823]: pam_unix(sudo:session): session closed for user root
Jan 26 16:16:33 compute-0 sshd-session[74909]: Connection closed by 192.168.122.30 port 47034
Jan 26 16:16:33 compute-0 sshd-session[74906]: pam_unix(sshd:session): session closed for user zuul
Jan 26 16:16:33 compute-0 systemd[1]: session-16.scope: Deactivated successfully.
Jan 26 16:16:33 compute-0 systemd[1]: session-16.scope: Consumed 3.530s CPU time.
Jan 26 16:16:33 compute-0 systemd-logind[788]: Session 16 logged out. Waiting for processes to exit.
Jan 26 16:16:33 compute-0 systemd-logind[788]: Removed session 16.
Jan 26 16:16:38 compute-0 sshd-session[75850]: Accepted publickey for zuul from 192.168.122.30 port 45940 ssh2: ECDSA SHA256:e2nTWydmUlQnW5BYhfFSR+TRarHnUqn0luI8Mjiyqhk
Jan 26 16:16:38 compute-0 systemd-logind[788]: New session 17 of user zuul.
Jan 26 16:16:38 compute-0 systemd[1]: Started Session 17 of User zuul.
Jan 26 16:16:38 compute-0 sshd-session[75850]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 26 16:16:39 compute-0 python3.9[76003]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 16:16:40 compute-0 sudo[76157]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqmfihgrnwpzzlniseziiyzveteqoufq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444200.2852788-27-255285995842724/AnsiballZ_systemd.py'
Jan 26 16:16:40 compute-0 sudo[76157]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:16:41 compute-0 python3.9[76159]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 26 16:16:41 compute-0 sudo[76157]: pam_unix(sudo:session): session closed for user root
Jan 26 16:16:41 compute-0 sudo[76311]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjwztiygixfjzvwptjrzurwkpapsgkpm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444201.4079-35-199093664168621/AnsiballZ_systemd.py'
Jan 26 16:16:41 compute-0 sudo[76311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:16:41 compute-0 python3.9[76313]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 26 16:16:42 compute-0 sudo[76311]: pam_unix(sudo:session): session closed for user root
Jan 26 16:16:42 compute-0 sudo[76464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zevriohcunlpyknoafwbsnjpajdvujnf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444202.2437778-44-124766385344505/AnsiballZ_command.py'
Jan 26 16:16:42 compute-0 sudo[76464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:16:42 compute-0 python3.9[76466]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 16:16:42 compute-0 sudo[76464]: pam_unix(sudo:session): session closed for user root
Jan 26 16:16:43 compute-0 sudo[76617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afellexevxytwnzqwatfjzxlfnomdwll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444203.1144848-52-207930940372833/AnsiballZ_stat.py'
Jan 26 16:16:43 compute-0 sudo[76617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:16:43 compute-0 python3.9[76619]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 16:16:43 compute-0 sudo[76617]: pam_unix(sudo:session): session closed for user root
Jan 26 16:16:44 compute-0 sudo[76771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utounipeslfjhtzafggtbsivrxewqvxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444203.8944662-60-174789247004737/AnsiballZ_command.py'
Jan 26 16:16:44 compute-0 sudo[76771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:16:44 compute-0 python3.9[76773]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 16:16:44 compute-0 sudo[76771]: pam_unix(sudo:session): session closed for user root
Jan 26 16:16:45 compute-0 sudo[76926]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymmwifjfycnkkojlqiebxoolbubnzcvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444204.6196666-68-235483904376623/AnsiballZ_file.py'
Jan 26 16:16:45 compute-0 sudo[76926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:16:45 compute-0 python3.9[76928]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:16:45 compute-0 sudo[76926]: pam_unix(sudo:session): session closed for user root
Jan 26 16:16:45 compute-0 sshd-session[75853]: Connection closed by 192.168.122.30 port 45940
Jan 26 16:16:45 compute-0 sshd-session[75850]: pam_unix(sshd:session): session closed for user zuul
Jan 26 16:16:45 compute-0 systemd[1]: session-17.scope: Deactivated successfully.
Jan 26 16:16:45 compute-0 systemd[1]: session-17.scope: Consumed 4.708s CPU time.
Jan 26 16:16:45 compute-0 systemd-logind[788]: Session 17 logged out. Waiting for processes to exit.
Jan 26 16:16:45 compute-0 systemd-logind[788]: Removed session 17.
Jan 26 16:16:51 compute-0 sshd-session[76953]: Accepted publickey for zuul from 192.168.122.30 port 54214 ssh2: ECDSA SHA256:e2nTWydmUlQnW5BYhfFSR+TRarHnUqn0luI8Mjiyqhk
Jan 26 16:16:51 compute-0 systemd-logind[788]: New session 18 of user zuul.
Jan 26 16:16:51 compute-0 systemd[1]: Started Session 18 of User zuul.
Jan 26 16:16:51 compute-0 sshd-session[76953]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 26 16:16:52 compute-0 python3.9[77106]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 16:16:53 compute-0 sudo[77260]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trluhimvgjlgftlwgzklzxlbixskrphl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444213.0396764-29-194833742260633/AnsiballZ_setup.py'
Jan 26 16:16:53 compute-0 sudo[77260]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:16:53 compute-0 python3.9[77262]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 26 16:16:53 compute-0 sudo[77260]: pam_unix(sudo:session): session closed for user root
Jan 26 16:16:54 compute-0 sudo[77344]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtjrvrppfttwwapuctdmvdrbtzuiruda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444213.0396764-29-194833742260633/AnsiballZ_dnf.py'
Jan 26 16:16:54 compute-0 sudo[77344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:16:54 compute-0 python3.9[77346]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 26 16:16:56 compute-0 sudo[77344]: pam_unix(sudo:session): session closed for user root
Jan 26 16:16:56 compute-0 python3.9[77497]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 16:16:58 compute-0 python3.9[77648]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 26 16:16:59 compute-0 python3.9[77798]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 16:16:59 compute-0 python3.9[77948]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 16:17:00 compute-0 sshd-session[76956]: Connection closed by 192.168.122.30 port 54214
Jan 26 16:17:00 compute-0 sshd-session[76953]: pam_unix(sshd:session): session closed for user zuul
Jan 26 16:17:00 compute-0 systemd[1]: session-18.scope: Deactivated successfully.
Jan 26 16:17:00 compute-0 systemd[1]: session-18.scope: Consumed 6.416s CPU time.
Jan 26 16:17:00 compute-0 systemd-logind[788]: Session 18 logged out. Waiting for processes to exit.
Jan 26 16:17:00 compute-0 systemd-logind[788]: Removed session 18.
Jan 26 16:17:07 compute-0 sshd-session[77973]: Accepted publickey for zuul from 192.168.122.30 port 38874 ssh2: ECDSA SHA256:e2nTWydmUlQnW5BYhfFSR+TRarHnUqn0luI8Mjiyqhk
Jan 26 16:17:07 compute-0 systemd-logind[788]: New session 19 of user zuul.
Jan 26 16:17:07 compute-0 systemd[1]: Started Session 19 of User zuul.
Jan 26 16:17:07 compute-0 sshd-session[77973]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 26 16:17:08 compute-0 python3.9[78126]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 16:17:09 compute-0 sudo[78280]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwdwotbiejryxpcbmatwhgnpiezmzbgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444229.2962172-45-256051967338014/AnsiballZ_file.py'
Jan 26 16:17:09 compute-0 sudo[78280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:17:09 compute-0 python3.9[78282]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:17:09 compute-0 sudo[78280]: pam_unix(sudo:session): session closed for user root
Jan 26 16:17:10 compute-0 sudo[78432]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjknfhsatcpogobaxgbirdwcqmrkrcok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444230.0609686-45-155894636893495/AnsiballZ_file.py'
Jan 26 16:17:10 compute-0 sudo[78432]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:17:10 compute-0 python3.9[78434]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:17:10 compute-0 sudo[78432]: pam_unix(sudo:session): session closed for user root
Jan 26 16:17:11 compute-0 sudo[78584]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atjdqgzqxpeqlwggqsqshjlvvyyxszqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444230.87451-60-18588001542499/AnsiballZ_stat.py'
Jan 26 16:17:11 compute-0 sudo[78584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:17:11 compute-0 python3.9[78586]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:17:11 compute-0 sudo[78584]: pam_unix(sudo:session): session closed for user root
Jan 26 16:17:12 compute-0 sudo[78707]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xktzahanorzvuoknomhtvrdulpbhbbrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444230.87451-60-18588001542499/AnsiballZ_copy.py'
Jan 26 16:17:12 compute-0 sudo[78707]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:17:12 compute-0 python3.9[78709]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769444230.87451-60-18588001542499/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=5de5f60c926ce1d5dc3ceee0b089a4b206a54033 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:17:12 compute-0 sudo[78707]: pam_unix(sudo:session): session closed for user root
Jan 26 16:17:12 compute-0 sudo[78859]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehftrikorvpbepffpvnbumnplbmjjdze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444232.4494185-60-88962882153517/AnsiballZ_stat.py'
Jan 26 16:17:12 compute-0 sudo[78859]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:17:12 compute-0 python3.9[78861]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:17:12 compute-0 sudo[78859]: pam_unix(sudo:session): session closed for user root
Jan 26 16:17:13 compute-0 sudo[78982]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpominjvcfdrbziukvsxnbvofvvttdvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444232.4494185-60-88962882153517/AnsiballZ_copy.py'
Jan 26 16:17:13 compute-0 sudo[78982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:17:13 compute-0 python3.9[78984]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769444232.4494185-60-88962882153517/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=e76c4629032c06afb9012045d2551556139b611a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:17:13 compute-0 sudo[78982]: pam_unix(sudo:session): session closed for user root
Jan 26 16:17:14 compute-0 sudo[79134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-purokfwbpgluwexmoykbdpkisstseraq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444233.705117-60-32252177138443/AnsiballZ_stat.py'
Jan 26 16:17:14 compute-0 sudo[79134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:17:14 compute-0 python3.9[79136]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:17:14 compute-0 sudo[79134]: pam_unix(sudo:session): session closed for user root
Jan 26 16:17:14 compute-0 sudo[79257]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfookfwvwxbjquycshmwjvgbuqsqzxva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444233.705117-60-32252177138443/AnsiballZ_copy.py'
Jan 26 16:17:14 compute-0 sudo[79257]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:17:15 compute-0 python3.9[79259]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769444233.705117-60-32252177138443/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=9d9e9f6663af72b8b94746effb0ac4a3db907e86 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:17:15 compute-0 sudo[79257]: pam_unix(sudo:session): session closed for user root
Jan 26 16:17:15 compute-0 sudo[79409]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kztnogwigrsskhetjcgihmiyucgpjima ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444235.4032683-104-243109125645636/AnsiballZ_file.py'
Jan 26 16:17:15 compute-0 sudo[79409]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:17:15 compute-0 python3.9[79411]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:17:15 compute-0 sudo[79409]: pam_unix(sudo:session): session closed for user root
Jan 26 16:17:16 compute-0 sudo[79561]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqbxxsisfajxatvxklukqrkyjqacwilj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444236.08537-104-85188833688657/AnsiballZ_file.py'
Jan 26 16:17:16 compute-0 sudo[79561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:17:16 compute-0 python3.9[79563]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:17:16 compute-0 sudo[79561]: pam_unix(sudo:session): session closed for user root
Jan 26 16:17:17 compute-0 sudo[79713]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okvgfphinqfsrbjhosbcipvlipixotoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444236.783983-119-70806136177317/AnsiballZ_stat.py'
Jan 26 16:17:17 compute-0 sudo[79713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:17:17 compute-0 python3.9[79715]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:17:17 compute-0 sudo[79713]: pam_unix(sudo:session): session closed for user root
Jan 26 16:17:17 compute-0 sudo[79836]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftewnmnrcsnkfdtqstjxqchnoaxrhpom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444236.783983-119-70806136177317/AnsiballZ_copy.py'
Jan 26 16:17:17 compute-0 sudo[79836]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:17:17 compute-0 python3.9[79838]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769444236.783983-119-70806136177317/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=645da29c5983d8215972d3bf2d6074ca2da9502f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:17:17 compute-0 sudo[79836]: pam_unix(sudo:session): session closed for user root
Jan 26 16:17:18 compute-0 sudo[79988]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqdyyiqhvnpmetnlrwafkmyyxkqlnktk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444238.0081685-119-84430684006878/AnsiballZ_stat.py'
Jan 26 16:17:18 compute-0 sudo[79988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:17:18 compute-0 python3.9[79990]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:17:18 compute-0 sudo[79988]: pam_unix(sudo:session): session closed for user root
Jan 26 16:17:18 compute-0 sudo[80111]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hixzbfuztimjuobhomzplcsudstwqusl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444238.0081685-119-84430684006878/AnsiballZ_copy.py'
Jan 26 16:17:18 compute-0 sudo[80111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:17:19 compute-0 python3.9[80113]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769444238.0081685-119-84430684006878/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=e76c4629032c06afb9012045d2551556139b611a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:17:19 compute-0 sudo[80111]: pam_unix(sudo:session): session closed for user root
Jan 26 16:17:19 compute-0 sudo[80263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cypqfzygmgfaszrogkvnwxluuzcalwek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444239.2306352-119-182299948818252/AnsiballZ_stat.py'
Jan 26 16:17:19 compute-0 sudo[80263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:17:19 compute-0 python3.9[80265]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:17:19 compute-0 sudo[80263]: pam_unix(sudo:session): session closed for user root
Jan 26 16:17:20 compute-0 sudo[80386]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apubagupxolsfukftiyzyowgavvprwce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444239.2306352-119-182299948818252/AnsiballZ_copy.py'
Jan 26 16:17:20 compute-0 sudo[80386]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:17:20 compute-0 python3.9[80388]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769444239.2306352-119-182299948818252/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=8aa8d80df54d030f25b0dc6c30b02b323c25e2cb backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:17:20 compute-0 sudo[80386]: pam_unix(sudo:session): session closed for user root
Jan 26 16:17:20 compute-0 sudo[80538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgpwjaybvcfclytylwptcttpahfhkiuh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444240.4723535-163-23698543370006/AnsiballZ_file.py'
Jan 26 16:17:20 compute-0 sudo[80538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:17:20 compute-0 python3.9[80540]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:17:20 compute-0 sudo[80538]: pam_unix(sudo:session): session closed for user root
Jan 26 16:17:21 compute-0 sudo[80690]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcreztyqhodyiusebxkonuvaivlxipck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444241.0786264-163-31763628518178/AnsiballZ_file.py'
Jan 26 16:17:21 compute-0 sudo[80690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:17:21 compute-0 python3.9[80692]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:17:21 compute-0 sudo[80690]: pam_unix(sudo:session): session closed for user root
Jan 26 16:17:22 compute-0 sudo[80842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zusczxciqhvcraggtwzdmweunktirzbh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444241.737099-178-216849383083996/AnsiballZ_stat.py'
Jan 26 16:17:22 compute-0 sudo[80842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:17:22 compute-0 python3.9[80844]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:17:22 compute-0 sudo[80842]: pam_unix(sudo:session): session closed for user root
Jan 26 16:17:22 compute-0 sudo[80965]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdlkeprurzxuadzzdiqpeludfulabgzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444241.737099-178-216849383083996/AnsiballZ_copy.py'
Jan 26 16:17:22 compute-0 sudo[80965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:17:22 compute-0 python3.9[80967]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769444241.737099-178-216849383083996/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=9442340f877c31cb8040b74e54dd3fd2bd22e76c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:17:22 compute-0 sudo[80965]: pam_unix(sudo:session): session closed for user root
Jan 26 16:17:23 compute-0 sudo[81117]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efnouioqztwreerrvpivofosbhjfygbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444242.8801162-178-266177938661686/AnsiballZ_stat.py'
Jan 26 16:17:23 compute-0 sudo[81117]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:17:23 compute-0 python3.9[81119]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:17:23 compute-0 sudo[81117]: pam_unix(sudo:session): session closed for user root
Jan 26 16:17:23 compute-0 sudo[81240]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-siidxsyxxzcwuwuqviwuflbyxubjmucn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444242.8801162-178-266177938661686/AnsiballZ_copy.py'
Jan 26 16:17:23 compute-0 sudo[81240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:17:24 compute-0 python3.9[81242]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769444242.8801162-178-266177938661686/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=e1147330d840c58c8fe721ee010060075b5d93f6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:17:24 compute-0 sudo[81240]: pam_unix(sudo:session): session closed for user root
Jan 26 16:17:24 compute-0 sudo[81392]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krzkgfrtndgdczupncwysayplvxznvbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444244.3613586-178-145866812609215/AnsiballZ_stat.py'
Jan 26 16:17:24 compute-0 sudo[81392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:17:24 compute-0 python3.9[81394]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:17:24 compute-0 sudo[81392]: pam_unix(sudo:session): session closed for user root
Jan 26 16:17:25 compute-0 sudo[81515]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gaoqoiohewdmkoijxlurxgjckigspckc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444244.3613586-178-145866812609215/AnsiballZ_copy.py'
Jan 26 16:17:25 compute-0 sudo[81515]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:17:25 compute-0 python3.9[81517]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769444244.3613586-178-145866812609215/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=74fb21b1ce56653396fc29cd30b13b571c1409bf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:17:25 compute-0 sudo[81515]: pam_unix(sudo:session): session closed for user root
Jan 26 16:17:25 compute-0 sudo[81667]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iicmjiamglmybrxikesgglvbhzxfqsbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444245.690412-222-246441852644499/AnsiballZ_file.py'
Jan 26 16:17:25 compute-0 sudo[81667]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:17:26 compute-0 python3.9[81669]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:17:26 compute-0 sudo[81667]: pam_unix(sudo:session): session closed for user root
Jan 26 16:17:26 compute-0 sudo[81819]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yftztzbshcacrnkiyaczuloigqrbgqha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444246.3125489-222-131064529661333/AnsiballZ_file.py'
Jan 26 16:17:26 compute-0 sudo[81819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:17:26 compute-0 python3.9[81821]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:17:26 compute-0 sudo[81819]: pam_unix(sudo:session): session closed for user root
Jan 26 16:17:27 compute-0 sudo[81971]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpsydhnpcowghgdxwsblqqiygpvvohxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444247.0009675-237-262852412580012/AnsiballZ_stat.py'
Jan 26 16:17:27 compute-0 sudo[81971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:17:27 compute-0 python3.9[81973]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:17:27 compute-0 sudo[81971]: pam_unix(sudo:session): session closed for user root
Jan 26 16:17:27 compute-0 sudo[82094]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjkjtjzfuwtikfilqtqqkzqxkggdcfsp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444247.0009675-237-262852412580012/AnsiballZ_copy.py'
Jan 26 16:17:27 compute-0 sudo[82094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:17:28 compute-0 python3.9[82096]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769444247.0009675-237-262852412580012/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=5dc62bf2566c75796279069fc0f1e50a909e67aa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:17:28 compute-0 sudo[82094]: pam_unix(sudo:session): session closed for user root
Jan 26 16:17:28 compute-0 sudo[82246]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-doowhizwfxvqefyykshglgcmnzevvqmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444248.2676015-237-130942947823420/AnsiballZ_stat.py'
Jan 26 16:17:28 compute-0 sudo[82246]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:17:28 compute-0 python3.9[82248]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:17:28 compute-0 sudo[82246]: pam_unix(sudo:session): session closed for user root
Jan 26 16:17:29 compute-0 sudo[82369]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvrmxigmlwfsvlkxwtgvhmhkencizteo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444248.2676015-237-130942947823420/AnsiballZ_copy.py'
Jan 26 16:17:29 compute-0 sudo[82369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:17:29 compute-0 python3.9[82371]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769444248.2676015-237-130942947823420/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=64d202f5957720920512cb356f2bbb9eede313ed backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:17:29 compute-0 sudo[82369]: pam_unix(sudo:session): session closed for user root
Jan 26 16:17:29 compute-0 sudo[82521]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwropljusjctebihsvdlnyyxqwgrvouj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444249.4011345-237-159353522879636/AnsiballZ_stat.py'
Jan 26 16:17:29 compute-0 sudo[82521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:17:29 compute-0 python3.9[82523]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:17:29 compute-0 sudo[82521]: pam_unix(sudo:session): session closed for user root
Jan 26 16:17:30 compute-0 sudo[82644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ritwfqtxkefewmzdnwmvggwydticqbcb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444249.4011345-237-159353522879636/AnsiballZ_copy.py'
Jan 26 16:17:30 compute-0 sudo[82644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:17:30 compute-0 python3.9[82646]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769444249.4011345-237-159353522879636/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=19aa8929a720ccb8604a27be3f8aeae7acbdbd44 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:17:30 compute-0 sudo[82644]: pam_unix(sudo:session): session closed for user root
Jan 26 16:17:31 compute-0 sudo[82796]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcjxequoxveoholgknjilahovlxccfsd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444250.7428498-281-108662914039124/AnsiballZ_file.py'
Jan 26 16:17:31 compute-0 sudo[82796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:17:31 compute-0 python3.9[82798]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:17:31 compute-0 sudo[82796]: pam_unix(sudo:session): session closed for user root
Jan 26 16:17:31 compute-0 sudo[82948]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rujhlqlawwnvumnxnpwvfbegdezisaaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444251.3907938-281-251231280154077/AnsiballZ_file.py'
Jan 26 16:17:31 compute-0 sudo[82948]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:17:31 compute-0 python3.9[82950]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:17:31 compute-0 sudo[82948]: pam_unix(sudo:session): session closed for user root
Jan 26 16:17:32 compute-0 sudo[83100]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mskrdnrrfgeptdianswhksdstjecqhbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444252.0650465-296-220497277859248/AnsiballZ_stat.py'
Jan 26 16:17:32 compute-0 sudo[83100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:17:32 compute-0 python3.9[83102]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:17:32 compute-0 sudo[83100]: pam_unix(sudo:session): session closed for user root
Jan 26 16:17:33 compute-0 chronyd[65691]: Selected source 162.159.200.123 (pool.ntp.org)
Jan 26 16:17:33 compute-0 sudo[83223]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erxzwvntxssdjkapgikblgtbihogmpdw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444252.0650465-296-220497277859248/AnsiballZ_copy.py'
Jan 26 16:17:33 compute-0 sudo[83223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:17:33 compute-0 python3.9[83225]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769444252.0650465-296-220497277859248/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=f1e49e08569b6d35c4d71e575daf92332f3f0204 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:17:33 compute-0 sudo[83223]: pam_unix(sudo:session): session closed for user root
Jan 26 16:17:34 compute-0 sudo[83375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkayoltogpbaephiywijziicfgspjsxr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444253.8100297-296-139697835356398/AnsiballZ_stat.py'
Jan 26 16:17:34 compute-0 sudo[83375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:17:34 compute-0 python3.9[83377]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:17:34 compute-0 sudo[83375]: pam_unix(sudo:session): session closed for user root
Jan 26 16:17:34 compute-0 sudo[83498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wffbdmclhjpaxxuqwtqrtsqidgnwwiyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444253.8100297-296-139697835356398/AnsiballZ_copy.py'
Jan 26 16:17:34 compute-0 sudo[83498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:17:35 compute-0 python3.9[83500]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769444253.8100297-296-139697835356398/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=e1147330d840c58c8fe721ee010060075b5d93f6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:17:35 compute-0 sudo[83498]: pam_unix(sudo:session): session closed for user root
Jan 26 16:17:35 compute-0 sudo[83650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrkbshcklibuhyvaakvjnksyfapbldvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444255.3421082-296-151180744598104/AnsiballZ_stat.py'
Jan 26 16:17:35 compute-0 sudo[83650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:17:35 compute-0 python3.9[83652]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:17:35 compute-0 sudo[83650]: pam_unix(sudo:session): session closed for user root
Jan 26 16:17:36 compute-0 sudo[83773]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrezomvfghrfuhjfokpzsgmhmscssouu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444255.3421082-296-151180744598104/AnsiballZ_copy.py'
Jan 26 16:17:36 compute-0 sudo[83773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:17:36 compute-0 python3.9[83775]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769444255.3421082-296-151180744598104/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=f1bbd9b1d10bfca06f5ae620feb600a7d7c1829f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:17:36 compute-0 sudo[83773]: pam_unix(sudo:session): session closed for user root
Jan 26 16:17:37 compute-0 sudo[83925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktldtmgllulvmfqdmnudzkfdjijoafek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444257.3910246-356-171854717896855/AnsiballZ_file.py'
Jan 26 16:17:37 compute-0 sudo[83925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:17:37 compute-0 python3.9[83927]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:17:37 compute-0 sudo[83925]: pam_unix(sudo:session): session closed for user root
Jan 26 16:17:38 compute-0 sudo[84077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xaukimjjzvkhvluspsybbadllyhnamif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444258.1839006-364-165393084623291/AnsiballZ_stat.py'
Jan 26 16:17:38 compute-0 sudo[84077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:17:38 compute-0 python3.9[84079]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:17:38 compute-0 sudo[84077]: pam_unix(sudo:session): session closed for user root
Jan 26 16:17:39 compute-0 sudo[84200]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szaqxtriknjggpdrpprxdvllttlwyuev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444258.1839006-364-165393084623291/AnsiballZ_copy.py'
Jan 26 16:17:39 compute-0 sudo[84200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:17:39 compute-0 python3.9[84202]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769444258.1839006-364-165393084623291/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=48027188bf350c9fc6c8da30ecdf77ef40b80f2e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:17:39 compute-0 sudo[84200]: pam_unix(sudo:session): session closed for user root
Jan 26 16:17:39 compute-0 sudo[84352]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyslhpmbysogcitsxawcrhzexnajdltl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444259.5562832-380-231470240077260/AnsiballZ_file.py'
Jan 26 16:17:39 compute-0 sudo[84352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:17:40 compute-0 python3.9[84354]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:17:40 compute-0 sudo[84352]: pam_unix(sudo:session): session closed for user root
Jan 26 16:17:41 compute-0 sudo[84504]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqsqfbekstoirognqnqfkrhokyeuwqbu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444260.2712028-388-192085627905573/AnsiballZ_stat.py'
Jan 26 16:17:41 compute-0 sudo[84504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:17:41 compute-0 python3.9[84506]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:17:41 compute-0 sudo[84504]: pam_unix(sudo:session): session closed for user root
Jan 26 16:17:41 compute-0 sudo[84627]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnrytujflzlchyjupuplaximkpnnyyub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444260.2712028-388-192085627905573/AnsiballZ_copy.py'
Jan 26 16:17:41 compute-0 sudo[84627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:17:41 compute-0 python3.9[84629]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769444260.2712028-388-192085627905573/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=48027188bf350c9fc6c8da30ecdf77ef40b80f2e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:17:41 compute-0 sudo[84627]: pam_unix(sudo:session): session closed for user root
Jan 26 16:17:42 compute-0 sudo[84779]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovoznisutksnptwamqiujcruzqnmpjmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444262.1103723-404-1972629079783/AnsiballZ_file.py'
Jan 26 16:17:42 compute-0 sudo[84779]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:17:42 compute-0 python3.9[84781]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:17:42 compute-0 sudo[84779]: pam_unix(sudo:session): session closed for user root
Jan 26 16:17:43 compute-0 sudo[84931]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uaftxnlfuurfpojlgavtrwxyczqtsbuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444263.6948013-412-213753018473035/AnsiballZ_stat.py'
Jan 26 16:17:43 compute-0 sudo[84931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:17:44 compute-0 python3.9[84933]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:17:44 compute-0 sudo[84931]: pam_unix(sudo:session): session closed for user root
Jan 26 16:17:44 compute-0 sudo[85054]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvgkvkdtmkvgifaomrvqezufgufsloyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444263.6948013-412-213753018473035/AnsiballZ_copy.py'
Jan 26 16:17:44 compute-0 sudo[85054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:17:44 compute-0 python3.9[85056]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769444263.6948013-412-213753018473035/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=48027188bf350c9fc6c8da30ecdf77ef40b80f2e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:17:44 compute-0 sudo[85054]: pam_unix(sudo:session): session closed for user root
Jan 26 16:17:45 compute-0 sudo[85206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcshrtdtotpqixnfqkkalesvquupadlz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444264.9730399-428-44013656547816/AnsiballZ_file.py'
Jan 26 16:17:45 compute-0 sudo[85206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:17:45 compute-0 python3.9[85208]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:17:45 compute-0 sudo[85206]: pam_unix(sudo:session): session closed for user root
Jan 26 16:17:45 compute-0 sudo[85358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwkalnaoddmglbncbdupnttqnvhzospl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444265.6185298-436-877449430826/AnsiballZ_stat.py'
Jan 26 16:17:45 compute-0 sudo[85358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:17:46 compute-0 python3.9[85360]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:17:46 compute-0 sudo[85358]: pam_unix(sudo:session): session closed for user root
Jan 26 16:17:46 compute-0 sudo[85481]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uiqxtjwqvorxgwrmxrtajqvxjjjlacxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444265.6185298-436-877449430826/AnsiballZ_copy.py'
Jan 26 16:17:46 compute-0 sudo[85481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:17:46 compute-0 python3.9[85483]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769444265.6185298-436-877449430826/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=48027188bf350c9fc6c8da30ecdf77ef40b80f2e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:17:46 compute-0 sudo[85481]: pam_unix(sudo:session): session closed for user root
Jan 26 16:17:47 compute-0 sudo[85633]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-syubbxvnuycgnsuwvlirenhssukkgicd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444266.7922142-452-102411808686792/AnsiballZ_file.py'
Jan 26 16:17:47 compute-0 sudo[85633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:17:47 compute-0 python3.9[85635]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:17:47 compute-0 sudo[85633]: pam_unix(sudo:session): session closed for user root
Jan 26 16:17:47 compute-0 sudo[85785]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvmchsngclbxbkuhowaqnbzlapiiwhpb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444267.5154524-460-176467282455361/AnsiballZ_stat.py'
Jan 26 16:17:47 compute-0 sudo[85785]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:17:47 compute-0 python3.9[85787]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:17:48 compute-0 sudo[85785]: pam_unix(sudo:session): session closed for user root
Jan 26 16:17:48 compute-0 sudo[85908]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nknjsmgivlrtgsdyzbzynybrckgqfqof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444267.5154524-460-176467282455361/AnsiballZ_copy.py'
Jan 26 16:17:48 compute-0 sudo[85908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:17:48 compute-0 python3.9[85910]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769444267.5154524-460-176467282455361/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=48027188bf350c9fc6c8da30ecdf77ef40b80f2e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:17:49 compute-0 sudo[85908]: pam_unix(sudo:session): session closed for user root
Jan 26 16:17:49 compute-0 sudo[86060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jiczcfwymkivuibysqelxxnhzousyubf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444269.2270281-476-191059618645830/AnsiballZ_file.py'
Jan 26 16:17:49 compute-0 sudo[86060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:17:49 compute-0 python3.9[86062]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:17:49 compute-0 sudo[86060]: pam_unix(sudo:session): session closed for user root
Jan 26 16:17:50 compute-0 sudo[86212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uakegddldeucqeqeeaszammgkzufnyss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444270.0286942-484-110841249071687/AnsiballZ_stat.py'
Jan 26 16:17:50 compute-0 sudo[86212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:17:50 compute-0 python3.9[86214]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:17:50 compute-0 sudo[86212]: pam_unix(sudo:session): session closed for user root
Jan 26 16:17:50 compute-0 sudo[86335]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-veljhrgtnlokiirribaxvwlgkawjhulo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444270.0286942-484-110841249071687/AnsiballZ_copy.py'
Jan 26 16:17:50 compute-0 sudo[86335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:17:51 compute-0 python3.9[86337]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769444270.0286942-484-110841249071687/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=48027188bf350c9fc6c8da30ecdf77ef40b80f2e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:17:51 compute-0 sudo[86335]: pam_unix(sudo:session): session closed for user root
Jan 26 16:17:51 compute-0 sudo[86487]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eldvctfdwynwbczmrypvfdvyvvzrktup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444271.2267032-500-187029826045245/AnsiballZ_file.py'
Jan 26 16:17:51 compute-0 sudo[86487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:17:51 compute-0 python3.9[86489]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:17:51 compute-0 sudo[86487]: pam_unix(sudo:session): session closed for user root
Jan 26 16:17:52 compute-0 sudo[86639]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdobcrzbcjnzqzyyjjvfkcnqtngxwcdd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444271.880403-508-189136008771099/AnsiballZ_stat.py'
Jan 26 16:17:52 compute-0 sudo[86639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:17:52 compute-0 python3.9[86641]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:17:52 compute-0 sudo[86639]: pam_unix(sudo:session): session closed for user root
Jan 26 16:17:52 compute-0 sudo[86762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbilotdffgilmpygenswcrzsssbkpnaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444271.880403-508-189136008771099/AnsiballZ_copy.py'
Jan 26 16:17:52 compute-0 sudo[86762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:17:53 compute-0 python3.9[86764]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769444271.880403-508-189136008771099/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=48027188bf350c9fc6c8da30ecdf77ef40b80f2e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:17:53 compute-0 sudo[86762]: pam_unix(sudo:session): session closed for user root
Jan 26 16:17:53 compute-0 sudo[86914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxbuhhhatxhwdhllkisgejzsmhisytuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444273.3745549-524-55215247671476/AnsiballZ_file.py'
Jan 26 16:17:53 compute-0 sudo[86914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:17:53 compute-0 python3.9[86916]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry-power-monitoring setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:17:53 compute-0 sudo[86914]: pam_unix(sudo:session): session closed for user root
Jan 26 16:17:54 compute-0 sudo[87066]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blmyhpdfxgjwopwmbllhaeqageoizqol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444274.019046-532-135263748767185/AnsiballZ_stat.py'
Jan 26 16:17:54 compute-0 sudo[87066]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:17:54 compute-0 python3.9[87068]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:17:54 compute-0 sudo[87066]: pam_unix(sudo:session): session closed for user root
Jan 26 16:17:54 compute-0 sudo[87189]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kteisqwnhelhnwcokpuewlqunkbefwlg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444274.019046-532-135263748767185/AnsiballZ_copy.py'
Jan 26 16:17:54 compute-0 sudo[87189]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:17:55 compute-0 python3.9[87191]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769444274.019046-532-135263748767185/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=48027188bf350c9fc6c8da30ecdf77ef40b80f2e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:17:55 compute-0 sudo[87189]: pam_unix(sudo:session): session closed for user root
Jan 26 16:17:55 compute-0 sshd-session[77976]: Connection closed by 192.168.122.30 port 38874
Jan 26 16:17:55 compute-0 sshd-session[77973]: pam_unix(sshd:session): session closed for user zuul
Jan 26 16:17:55 compute-0 systemd[1]: session-19.scope: Deactivated successfully.
Jan 26 16:17:55 compute-0 systemd[1]: session-19.scope: Consumed 35.162s CPU time.
Jan 26 16:17:55 compute-0 systemd-logind[788]: Session 19 logged out. Waiting for processes to exit.
Jan 26 16:17:55 compute-0 systemd-logind[788]: Removed session 19.
Jan 26 16:18:01 compute-0 sshd-session[87216]: Accepted publickey for zuul from 192.168.122.30 port 58160 ssh2: ECDSA SHA256:e2nTWydmUlQnW5BYhfFSR+TRarHnUqn0luI8Mjiyqhk
Jan 26 16:18:01 compute-0 systemd-logind[788]: New session 20 of user zuul.
Jan 26 16:18:01 compute-0 systemd[1]: Started Session 20 of User zuul.
Jan 26 16:18:01 compute-0 sshd-session[87216]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 26 16:18:02 compute-0 python3.9[87369]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 16:18:03 compute-0 sudo[87523]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oaggjcvwcapfhlyztrlqxhljdkssjogo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444283.2677298-29-198705127063376/AnsiballZ_file.py'
Jan 26 16:18:03 compute-0 sudo[87523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:18:04 compute-0 python3.9[87525]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:18:04 compute-0 sudo[87523]: pam_unix(sudo:session): session closed for user root
Jan 26 16:18:04 compute-0 sudo[87675]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtotkupanekiiwimopodiiqdtigexvvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444284.325346-29-179328697306300/AnsiballZ_file.py'
Jan 26 16:18:04 compute-0 sudo[87675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:18:04 compute-0 python3.9[87677]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:18:04 compute-0 sudo[87675]: pam_unix(sudo:session): session closed for user root
Jan 26 16:18:05 compute-0 python3.9[87827]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 16:18:06 compute-0 sudo[87977]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lngrhrlggsugrncmdyuentdxxvpuptqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444285.7759712-52-142194715461290/AnsiballZ_seboolean.py'
Jan 26 16:18:06 compute-0 sudo[87977]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:18:06 compute-0 python3.9[87979]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 26 16:18:07 compute-0 sudo[87977]: pam_unix(sudo:session): session closed for user root
Jan 26 16:18:08 compute-0 sudo[88133]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amlinwjxampoteadfkxriyqfqnblguzb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444288.1079526-62-201894668239071/AnsiballZ_setup.py'
Jan 26 16:18:08 compute-0 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Jan 26 16:18:08 compute-0 sudo[88133]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:18:08 compute-0 python3.9[88135]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 26 16:18:08 compute-0 sudo[88133]: pam_unix(sudo:session): session closed for user root
Jan 26 16:18:09 compute-0 sudo[88217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmfdivaslmfbyxxazedxzspdsqoxxofw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444288.1079526-62-201894668239071/AnsiballZ_dnf.py'
Jan 26 16:18:09 compute-0 sudo[88217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:18:09 compute-0 python3.9[88219]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 26 16:18:11 compute-0 sudo[88217]: pam_unix(sudo:session): session closed for user root
Jan 26 16:18:11 compute-0 sudo[88370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-korqwlqltqrbzrtdwmlnzwrijinzpfxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444291.2614877-74-127113050119546/AnsiballZ_systemd.py'
Jan 26 16:18:11 compute-0 sudo[88370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:18:12 compute-0 python3.9[88372]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 26 16:18:12 compute-0 sudo[88370]: pam_unix(sudo:session): session closed for user root
Jan 26 16:18:12 compute-0 sudo[88525]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdphprvakbauefavotdjxhsfqumahnmy ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769444292.4283705-82-233725299550994/AnsiballZ_edpm_nftables_snippet.py'
Jan 26 16:18:12 compute-0 sudo[88525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:18:13 compute-0 python3[88527]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                            rule:
                                              proto: udp
                                              dport: 4789
                                          - rule_name: 119 neutron geneve networks
                                            rule:
                                              proto: udp
                                              dport: 6081
                                              state: ["UNTRACKED"]
                                          - rule_name: 120 neutron geneve networks no conntrack
                                            rule:
                                              proto: udp
                                              dport: 6081
                                              table: raw
                                              chain: OUTPUT
                                              jump: NOTRACK
                                              action: append
                                              state: []
                                          - rule_name: 121 neutron geneve networks no conntrack
                                            rule:
                                              proto: udp
                                              dport: 6081
                                              table: raw
                                              chain: PREROUTING
                                              jump: NOTRACK
                                              action: append
                                              state: []
                                           dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Jan 26 16:18:13 compute-0 sudo[88525]: pam_unix(sudo:session): session closed for user root
Jan 26 16:18:13 compute-0 sudo[88677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtcmqmebidsrsoxmqrkytjnthlbxacrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444293.2720912-91-225996685998278/AnsiballZ_file.py'
Jan 26 16:18:13 compute-0 sudo[88677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:18:13 compute-0 python3.9[88679]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:18:13 compute-0 sudo[88677]: pam_unix(sudo:session): session closed for user root
Jan 26 16:18:14 compute-0 sudo[88829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrkcnjhktxytbaowppyreclmxqyrtxoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444293.9259887-99-182749226223138/AnsiballZ_stat.py'
Jan 26 16:18:14 compute-0 sudo[88829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:18:14 compute-0 python3.9[88831]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:18:14 compute-0 sudo[88829]: pam_unix(sudo:session): session closed for user root
Jan 26 16:18:15 compute-0 sudo[88907]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apjxehjymozzfhvstkgjdwlvybejqocc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444293.9259887-99-182749226223138/AnsiballZ_file.py'
Jan 26 16:18:15 compute-0 sudo[88907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:18:15 compute-0 python3.9[88909]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:18:15 compute-0 sudo[88907]: pam_unix(sudo:session): session closed for user root
Jan 26 16:18:15 compute-0 sudo[89059]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxhlxlkobhuflhglqymfprmifwlhocxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444295.455027-111-21477935027086/AnsiballZ_stat.py'
Jan 26 16:18:15 compute-0 sudo[89059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:18:15 compute-0 python3.9[89061]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:18:15 compute-0 sudo[89059]: pam_unix(sudo:session): session closed for user root
Jan 26 16:18:16 compute-0 sudo[89137]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swubthavjsdjvexqkoaywaczdbstnmun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444295.455027-111-21477935027086/AnsiballZ_file.py'
Jan 26 16:18:16 compute-0 sudo[89137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:18:16 compute-0 python3.9[89139]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.il2vm6rn recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:18:16 compute-0 sudo[89137]: pam_unix(sudo:session): session closed for user root
Jan 26 16:18:16 compute-0 sudo[89289]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbkeoqbnqqbjnsafmnqvoenotuhrpggz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444296.677997-123-277439646775927/AnsiballZ_stat.py'
Jan 26 16:18:16 compute-0 sudo[89289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:18:17 compute-0 python3.9[89291]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:18:17 compute-0 sudo[89289]: pam_unix(sudo:session): session closed for user root
Jan 26 16:18:17 compute-0 sudo[89367]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psablaklcjevhgdsvrawzgnamsenhlpu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444296.677997-123-277439646775927/AnsiballZ_file.py'
Jan 26 16:18:17 compute-0 sudo[89367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:18:17 compute-0 python3.9[89369]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:18:17 compute-0 sudo[89367]: pam_unix(sudo:session): session closed for user root
Jan 26 16:18:18 compute-0 sudo[89519]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnfnhvgwwdngiylfaqruizpxaqzdwrxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444297.7252448-136-33321681584084/AnsiballZ_command.py'
Jan 26 16:18:18 compute-0 sudo[89519]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:18:18 compute-0 python3.9[89521]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 16:18:18 compute-0 sudo[89519]: pam_unix(sudo:session): session closed for user root
Jan 26 16:18:19 compute-0 sudo[89672]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qldgmhshsqmnmxujinaxyrjxkourwoxh ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769444299.009783-144-185663118778554/AnsiballZ_edpm_nftables_from_files.py'
Jan 26 16:18:19 compute-0 sudo[89672]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:18:19 compute-0 python3[89674]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 26 16:18:19 compute-0 sudo[89672]: pam_unix(sudo:session): session closed for user root
Jan 26 16:18:20 compute-0 sudo[89824]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgceycszfxtihllkiplszwsjupvgtxbg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444299.9943805-152-229916996058748/AnsiballZ_stat.py'
Jan 26 16:18:20 compute-0 sudo[89824]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:18:20 compute-0 python3.9[89826]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:18:20 compute-0 sudo[89824]: pam_unix(sudo:session): session closed for user root
Jan 26 16:18:21 compute-0 sudo[89949]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vuhoctanejdlvmueywgtaqjbrhzsquqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444299.9943805-152-229916996058748/AnsiballZ_copy.py'
Jan 26 16:18:21 compute-0 sudo[89949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:18:21 compute-0 python3.9[89951]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769444299.9943805-152-229916996058748/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:18:21 compute-0 sudo[89949]: pam_unix(sudo:session): session closed for user root
Jan 26 16:18:22 compute-0 sudo[90101]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xezcftfdgepesbxvnvkhgqoyvbcteugy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444301.4603488-167-128052909932579/AnsiballZ_stat.py'
Jan 26 16:18:22 compute-0 sudo[90101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:18:22 compute-0 python3.9[90103]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:18:22 compute-0 sudo[90101]: pam_unix(sudo:session): session closed for user root
Jan 26 16:18:22 compute-0 sudo[90226]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpfupwzqeuhaaqbpkzlziluukfdtiwft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444301.4603488-167-128052909932579/AnsiballZ_copy.py'
Jan 26 16:18:22 compute-0 sudo[90226]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:18:22 compute-0 python3.9[90228]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769444301.4603488-167-128052909932579/.source.nft follow=False _original_basename=jump-chain.j2 checksum=ac8dea350c18f51f54d48dacc09613cda4c5540c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:18:22 compute-0 sudo[90226]: pam_unix(sudo:session): session closed for user root
Jan 26 16:18:23 compute-0 sudo[90378]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmdxqgztzfelzcplazxwdifapbhlojbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444302.972504-182-176137931776669/AnsiballZ_stat.py'
Jan 26 16:18:23 compute-0 sudo[90378]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:18:23 compute-0 python3.9[90380]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:18:23 compute-0 sudo[90378]: pam_unix(sudo:session): session closed for user root
Jan 26 16:18:23 compute-0 sudo[90503]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izubglvrcjejupmyfavdxjrrkloejekm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444302.972504-182-176137931776669/AnsiballZ_copy.py'
Jan 26 16:18:23 compute-0 sudo[90503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:18:24 compute-0 python3.9[90505]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769444302.972504-182-176137931776669/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:18:24 compute-0 sudo[90503]: pam_unix(sudo:session): session closed for user root
Jan 26 16:18:24 compute-0 sudo[90655]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utyiogpiwyuoxlbulrxryibkdbolimob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444304.172404-197-16418850888460/AnsiballZ_stat.py'
Jan 26 16:18:24 compute-0 sudo[90655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:18:24 compute-0 python3.9[90657]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:18:24 compute-0 sudo[90655]: pam_unix(sudo:session): session closed for user root
Jan 26 16:18:24 compute-0 sudo[90780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtktnekrfltfvadtkngsqwedhfsxmdyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444304.172404-197-16418850888460/AnsiballZ_copy.py'
Jan 26 16:18:24 compute-0 sudo[90780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:18:25 compute-0 python3.9[90782]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769444304.172404-197-16418850888460/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:18:25 compute-0 sudo[90780]: pam_unix(sudo:session): session closed for user root
Jan 26 16:18:25 compute-0 sudo[90932]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tavfqslwldrnptxzazdgdnhtutlcbkqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444305.3050332-212-96977165761104/AnsiballZ_stat.py'
Jan 26 16:18:25 compute-0 sudo[90932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:18:25 compute-0 python3.9[90934]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:18:25 compute-0 sudo[90932]: pam_unix(sudo:session): session closed for user root
Jan 26 16:18:26 compute-0 sudo[91057]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjjfgwggbdgbqccglmvuhunlokmhcyrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444305.3050332-212-96977165761104/AnsiballZ_copy.py'
Jan 26 16:18:26 compute-0 sudo[91057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:18:26 compute-0 python3.9[91059]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769444305.3050332-212-96977165761104/.source.nft follow=False _original_basename=ruleset.j2 checksum=eb691bdb7d792c5f8ff0d719e807fe1c95b09438 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:18:26 compute-0 sudo[91057]: pam_unix(sudo:session): session closed for user root
Jan 26 16:18:27 compute-0 sudo[91209]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aftqjuqyrriaafmtdfxgnpokxtewjylg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444306.582149-227-229735437969803/AnsiballZ_file.py'
Jan 26 16:18:27 compute-0 sudo[91209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:18:27 compute-0 python3.9[91211]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:18:27 compute-0 sudo[91209]: pam_unix(sudo:session): session closed for user root
Jan 26 16:18:27 compute-0 sudo[91361]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnsszjdpbwlgcwargqdjksdzyajovxiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444307.4413795-235-141502864001578/AnsiballZ_command.py'
Jan 26 16:18:27 compute-0 sudo[91361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:18:27 compute-0 python3.9[91363]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 16:18:27 compute-0 sudo[91361]: pam_unix(sudo:session): session closed for user root
Jan 26 16:18:28 compute-0 sudo[91516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iyeczersxuiohmdleliwoczdubntxlbu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444308.1486692-243-243201836293496/AnsiballZ_blockinfile.py'
Jan 26 16:18:28 compute-0 sudo[91516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:18:28 compute-0 python3.9[91518]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:18:28 compute-0 sudo[91516]: pam_unix(sudo:session): session closed for user root
Jan 26 16:18:29 compute-0 sudo[91668]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhncnqkpffcisjhrpyreyjrxljshhvef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444309.1452637-252-238035141879496/AnsiballZ_command.py'
Jan 26 16:18:29 compute-0 sudo[91668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:18:29 compute-0 python3.9[91670]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 16:18:29 compute-0 sudo[91668]: pam_unix(sudo:session): session closed for user root
Jan 26 16:18:30 compute-0 sudo[91821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thyjkpmyshwbdqklofmaownaqlbmiaeq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444309.7945511-260-13656258670831/AnsiballZ_stat.py'
Jan 26 16:18:30 compute-0 sudo[91821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:18:30 compute-0 python3.9[91823]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 16:18:30 compute-0 sudo[91821]: pam_unix(sudo:session): session closed for user root
Jan 26 16:18:30 compute-0 sudo[91975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwuideorcpcsqpmnuydmpxonqonmcjxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444310.6935647-268-57056709117241/AnsiballZ_command.py'
Jan 26 16:18:30 compute-0 sudo[91975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:18:31 compute-0 python3.9[91977]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 16:18:31 compute-0 sudo[91975]: pam_unix(sudo:session): session closed for user root
Jan 26 16:18:31 compute-0 sudo[92130]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgvjxhzsrcynmyilpgwdavczlqvcuryy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444311.350142-276-83655145661951/AnsiballZ_file.py'
Jan 26 16:18:31 compute-0 sudo[92130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:18:31 compute-0 python3.9[92132]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:18:32 compute-0 sudo[92130]: pam_unix(sudo:session): session closed for user root
Jan 26 16:18:33 compute-0 python3.9[92282]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 16:18:34 compute-0 sudo[92433]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thzulakcrcoiolbqbxjvegewrecdjwzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444313.8549201-316-164094787482128/AnsiballZ_command.py'
Jan 26 16:18:34 compute-0 sudo[92433]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:18:34 compute-0 python3.9[92435]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:2e:0a:8d:1d:08:09" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 16:18:34 compute-0 ovs-vsctl[92436]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:2e:0a:8d:1d:08:09 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Jan 26 16:18:34 compute-0 sudo[92433]: pam_unix(sudo:session): session closed for user root
Jan 26 16:18:34 compute-0 sudo[92586]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehnlsvtcylrinkjszhprbyprzgjezdjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444314.543498-325-211923717250340/AnsiballZ_command.py'
Jan 26 16:18:34 compute-0 sudo[92586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:18:35 compute-0 python3.9[92588]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                            ovs-vsctl show | grep -q "Manager"
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 16:18:35 compute-0 sudo[92586]: pam_unix(sudo:session): session closed for user root
Jan 26 16:18:35 compute-0 sudo[92741]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-piralwklnlomfihamdruppjsfnxizobn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444315.2765334-333-180360216575034/AnsiballZ_command.py'
Jan 26 16:18:35 compute-0 sudo[92741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:18:35 compute-0 python3.9[92743]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 16:18:35 compute-0 ovs-vsctl[92744]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Jan 26 16:18:35 compute-0 sudo[92741]: pam_unix(sudo:session): session closed for user root
Jan 26 16:18:36 compute-0 python3.9[92894]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 16:18:36 compute-0 sudo[93046]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lihcmgrtvfcqgmmdnrrqblmnijubisuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444316.679984-350-115936128960908/AnsiballZ_file.py'
Jan 26 16:18:36 compute-0 sudo[93046]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:18:37 compute-0 python3.9[93048]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:18:37 compute-0 sudo[93046]: pam_unix(sudo:session): session closed for user root
Jan 26 16:18:37 compute-0 sudo[93198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hoqyhqinjdzkjexuajxyxfguqwjwhhwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444317.3396058-358-118102662076739/AnsiballZ_stat.py'
Jan 26 16:18:37 compute-0 sudo[93198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:18:37 compute-0 python3.9[93200]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:18:37 compute-0 sudo[93198]: pam_unix(sudo:session): session closed for user root
Jan 26 16:18:38 compute-0 sudo[93276]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwegfzjpfyvvbvtxfeuqlooiucuediah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444317.3396058-358-118102662076739/AnsiballZ_file.py'
Jan 26 16:18:38 compute-0 sudo[93276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:18:38 compute-0 python3.9[93278]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:18:38 compute-0 sudo[93276]: pam_unix(sudo:session): session closed for user root
Jan 26 16:18:38 compute-0 sudo[93428]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcqezdhxoheytgzxsqwkyczeduwrefcu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444318.4234812-358-24046449865878/AnsiballZ_stat.py'
Jan 26 16:18:38 compute-0 sudo[93428]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:18:38 compute-0 python3.9[93430]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:18:38 compute-0 sudo[93428]: pam_unix(sudo:session): session closed for user root
Jan 26 16:18:39 compute-0 sudo[93506]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfhwydgjqqcvkyureoujyvmjzrmszjih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444318.4234812-358-24046449865878/AnsiballZ_file.py'
Jan 26 16:18:39 compute-0 sudo[93506]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:18:39 compute-0 python3.9[93508]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:18:39 compute-0 sudo[93506]: pam_unix(sudo:session): session closed for user root
Jan 26 16:18:39 compute-0 sudo[93658]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxpcuhcztcucyclvomegfloqhrzgtyry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444319.4949844-381-150239156535667/AnsiballZ_file.py'
Jan 26 16:18:39 compute-0 sudo[93658]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:18:39 compute-0 python3.9[93660]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:18:39 compute-0 sudo[93658]: pam_unix(sudo:session): session closed for user root
Jan 26 16:18:40 compute-0 sudo[93810]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yivwqakhpkhubutlkkkfilwujjmmqtev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444320.1105707-389-155375997997905/AnsiballZ_stat.py'
Jan 26 16:18:40 compute-0 sudo[93810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:18:40 compute-0 python3.9[93812]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:18:40 compute-0 sudo[93810]: pam_unix(sudo:session): session closed for user root
Jan 26 16:18:40 compute-0 sudo[93888]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-naisukvynqelpjmnujzhdwrtoredkjvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444320.1105707-389-155375997997905/AnsiballZ_file.py'
Jan 26 16:18:40 compute-0 sudo[93888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:18:41 compute-0 python3.9[93890]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:18:41 compute-0 sudo[93888]: pam_unix(sudo:session): session closed for user root
Jan 26 16:18:41 compute-0 sudo[94040]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtftmbutmybfbwctybzudwlnfmusylzt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444321.2050498-401-274006702891360/AnsiballZ_stat.py'
Jan 26 16:18:41 compute-0 sudo[94040]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:18:41 compute-0 python3.9[94042]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:18:41 compute-0 sudo[94040]: pam_unix(sudo:session): session closed for user root
Jan 26 16:18:41 compute-0 sudo[94118]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtgqzjetdutzubofplmsqgtjozbgwdvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444321.2050498-401-274006702891360/AnsiballZ_file.py'
Jan 26 16:18:41 compute-0 sudo[94118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:18:42 compute-0 python3.9[94120]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:18:42 compute-0 sudo[94118]: pam_unix(sudo:session): session closed for user root
Jan 26 16:18:42 compute-0 sudo[94270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtacotsyacudbcvswoqdlykzixhdopme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444322.2678688-413-198089173018866/AnsiballZ_systemd.py'
Jan 26 16:18:42 compute-0 sudo[94270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:18:43 compute-0 python3.9[94272]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 16:18:43 compute-0 systemd[1]: Reloading.
Jan 26 16:18:43 compute-0 systemd-rc-local-generator[94299]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 16:18:43 compute-0 systemd-sysv-generator[94302]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 16:18:43 compute-0 sudo[94270]: pam_unix(sudo:session): session closed for user root
Jan 26 16:18:43 compute-0 sudo[94459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvhxecevmfiohfhonxjuejnmturmpbes ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444323.466249-421-169404112698726/AnsiballZ_stat.py'
Jan 26 16:18:43 compute-0 sudo[94459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:18:43 compute-0 python3.9[94461]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:18:44 compute-0 sudo[94459]: pam_unix(sudo:session): session closed for user root
Jan 26 16:18:44 compute-0 sudo[94537]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npwdixmcpuiaudofjdeumfqtikjjfibm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444323.466249-421-169404112698726/AnsiballZ_file.py'
Jan 26 16:18:44 compute-0 sudo[94537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:18:44 compute-0 python3.9[94539]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:18:44 compute-0 sudo[94537]: pam_unix(sudo:session): session closed for user root
Jan 26 16:18:45 compute-0 sudo[94689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eifworaguaziqsxrysxvnvglfxfnddzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444324.7632453-433-232806816759506/AnsiballZ_stat.py'
Jan 26 16:18:45 compute-0 sudo[94689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:18:45 compute-0 python3.9[94691]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:18:45 compute-0 sudo[94689]: pam_unix(sudo:session): session closed for user root
Jan 26 16:18:45 compute-0 sudo[94767]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcxwziubaowcsyzagietkwhtbhaneqhs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444324.7632453-433-232806816759506/AnsiballZ_file.py'
Jan 26 16:18:45 compute-0 sudo[94767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:18:45 compute-0 python3.9[94769]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:18:45 compute-0 sudo[94767]: pam_unix(sudo:session): session closed for user root
Jan 26 16:18:46 compute-0 sudo[94919]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uskaglrzqdscjyjifqcceviytmbnedqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444326.0054927-445-270783048803642/AnsiballZ_systemd.py'
Jan 26 16:18:46 compute-0 sudo[94919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:18:46 compute-0 python3.9[94921]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 16:18:46 compute-0 systemd[1]: Reloading.
Jan 26 16:18:46 compute-0 systemd-sysv-generator[94952]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 16:18:46 compute-0 systemd-rc-local-generator[94947]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 16:18:46 compute-0 systemd[1]: Starting Create netns directory...
Jan 26 16:18:46 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 26 16:18:46 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 26 16:18:46 compute-0 systemd[1]: Finished Create netns directory.
Jan 26 16:18:46 compute-0 sudo[94919]: pam_unix(sudo:session): session closed for user root
Jan 26 16:18:47 compute-0 sudo[95113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alduqoqnnpiowsdgpwzyrghongmeasgm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444327.204676-455-259588797023518/AnsiballZ_file.py'
Jan 26 16:18:47 compute-0 sudo[95113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:18:47 compute-0 python3.9[95115]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:18:47 compute-0 sudo[95113]: pam_unix(sudo:session): session closed for user root
Jan 26 16:18:48 compute-0 sudo[95265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vahusgbqgugnvrdtddwqstilgazkhfmt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444327.8521905-463-259023814849236/AnsiballZ_stat.py'
Jan 26 16:18:48 compute-0 sudo[95265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:18:48 compute-0 python3.9[95267]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:18:48 compute-0 sudo[95265]: pam_unix(sudo:session): session closed for user root
Jan 26 16:18:48 compute-0 sudo[95388]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzlnmhndaaxtmnebwpjnexfyfthzzhqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444327.8521905-463-259023814849236/AnsiballZ_copy.py'
Jan 26 16:18:48 compute-0 sudo[95388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:18:49 compute-0 python3.9[95390]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769444327.8521905-463-259023814849236/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:18:49 compute-0 sudo[95388]: pam_unix(sudo:session): session closed for user root
Jan 26 16:18:49 compute-0 sudo[95540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exkvnvgqfibubesryyeuuxegfoddoxhh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444329.6076694-480-9684414465558/AnsiballZ_file.py'
Jan 26 16:18:49 compute-0 sudo[95540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:18:50 compute-0 python3.9[95542]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:18:50 compute-0 sudo[95540]: pam_unix(sudo:session): session closed for user root
Jan 26 16:18:50 compute-0 sudo[95692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqrccdxeaawjiesqwwcsmyyxrlfqussw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444330.3580425-488-212851196300384/AnsiballZ_file.py'
Jan 26 16:18:50 compute-0 sudo[95692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:18:51 compute-0 python3.9[95694]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:18:51 compute-0 sudo[95692]: pam_unix(sudo:session): session closed for user root
Jan 26 16:18:51 compute-0 sudo[95844]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvuefebtuoawrhxpdfgepmoicbszswlp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444331.2101414-496-262484929463195/AnsiballZ_stat.py'
Jan 26 16:18:51 compute-0 sudo[95844]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:18:51 compute-0 python3.9[95846]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:18:51 compute-0 sudo[95844]: pam_unix(sudo:session): session closed for user root
Jan 26 16:18:52 compute-0 sudo[95967]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tctuawmkkmrxjpshyrrylsvdcfaayjdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444331.2101414-496-262484929463195/AnsiballZ_copy.py'
Jan 26 16:18:52 compute-0 sudo[95967]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:18:52 compute-0 python3.9[95969]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769444331.2101414-496-262484929463195/.source.json _original_basename=.44z__7bj follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:18:52 compute-0 sudo[95967]: pam_unix(sudo:session): session closed for user root
Jan 26 16:18:52 compute-0 python3.9[96119]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:18:55 compute-0 sudo[96540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpwvfgwpptffgpbpdilbvyuydaygdqes ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444334.7044423-536-131843694033018/AnsiballZ_container_config_data.py'
Jan 26 16:18:55 compute-0 sudo[96540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:18:55 compute-0 python3.9[96542]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Jan 26 16:18:55 compute-0 sudo[96540]: pam_unix(sudo:session): session closed for user root
Jan 26 16:18:56 compute-0 sudo[96692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbjdebwcscntpxldhciyjzaiwfovakuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444335.7141504-547-110101094134845/AnsiballZ_container_config_hash.py'
Jan 26 16:18:56 compute-0 sudo[96692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:18:56 compute-0 python3.9[96694]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 26 16:18:56 compute-0 sudo[96692]: pam_unix(sudo:session): session closed for user root
Jan 26 16:18:57 compute-0 sudo[96844]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifxkdbamymvjdsmlvsbydgjfadrvujue ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769444336.9693997-557-102495745873309/AnsiballZ_edpm_container_manage.py'
Jan 26 16:18:57 compute-0 sudo[96844]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:18:57 compute-0 python3[96846]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json containers=['ovn_controller'] log_base_path=/var/log/containers/stdouts debug=False
Jan 26 16:18:57 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 26 16:18:57 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 26 16:18:57 compute-0 podman[96885]: 2026-01-26 16:18:57.994183264 +0000 UTC m=+0.073719314 container create 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 26 16:18:57 compute-0 podman[96885]: 2026-01-26 16:18:57.943579575 +0000 UTC m=+0.023115645 image pull a17927617ef5a603f0594ee0d6df65aabdc9e0303ccc5a52c36f193de33ee0fe quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 26 16:18:58 compute-0 python3[96846]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435 --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 26 16:18:58 compute-0 sudo[96844]: pam_unix(sudo:session): session closed for user root
Jan 26 16:18:58 compute-0 sudo[97073]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dujnuitufwmjvytwcyukvdmxxquutiey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444338.2962923-565-176625423601444/AnsiballZ_stat.py'
Jan 26 16:18:58 compute-0 sudo[97073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:18:58 compute-0 python3.9[97075]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 16:18:58 compute-0 sudo[97073]: pam_unix(sudo:session): session closed for user root
Jan 26 16:18:58 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 26 16:18:59 compute-0 sudo[97227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlbbnhydgeakqjpkioltsscyxvuthgfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444338.9888105-574-150539406033485/AnsiballZ_file.py'
Jan 26 16:18:59 compute-0 sudo[97227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:18:59 compute-0 python3.9[97229]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:18:59 compute-0 sudo[97227]: pam_unix(sudo:session): session closed for user root
Jan 26 16:18:59 compute-0 sudo[97303]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sriqpcpydrlulkmwljpqikufofuqmutg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444338.9888105-574-150539406033485/AnsiballZ_stat.py'
Jan 26 16:18:59 compute-0 sudo[97303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:18:59 compute-0 python3.9[97305]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 16:18:59 compute-0 sudo[97303]: pam_unix(sudo:session): session closed for user root
Jan 26 16:19:00 compute-0 sudo[97454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhylhgirwlnzlhpzswrhphzukshoclcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444339.9573133-574-213707740677513/AnsiballZ_copy.py'
Jan 26 16:19:00 compute-0 sudo[97454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:19:00 compute-0 python3.9[97456]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769444339.9573133-574-213707740677513/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:19:00 compute-0 sudo[97454]: pam_unix(sudo:session): session closed for user root
Jan 26 16:19:00 compute-0 sudo[97530]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glhftfnmcbtpcnivnypsxgakuwoeawrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444339.9573133-574-213707740677513/AnsiballZ_systemd.py'
Jan 26 16:19:00 compute-0 sudo[97530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:19:01 compute-0 python3.9[97532]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 26 16:19:01 compute-0 systemd[1]: Reloading.
Jan 26 16:19:01 compute-0 systemd-sysv-generator[97563]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 16:19:01 compute-0 systemd-rc-local-generator[97559]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 16:19:01 compute-0 sudo[97530]: pam_unix(sudo:session): session closed for user root
Jan 26 16:19:01 compute-0 sudo[97641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thkmeyvyhsrngkktkatbsrwbvbygunaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444339.9573133-574-213707740677513/AnsiballZ_systemd.py'
Jan 26 16:19:01 compute-0 sudo[97641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:19:02 compute-0 python3.9[97643]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 16:19:02 compute-0 systemd[1]: Reloading.
Jan 26 16:19:02 compute-0 systemd-rc-local-generator[97673]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 16:19:02 compute-0 systemd-sysv-generator[97676]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 16:19:02 compute-0 systemd[1]: Starting ovn_controller container...
Jan 26 16:19:02 compute-0 systemd[1]: Created slice Virtual Machine and Container Slice.
Jan 26 16:19:02 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:19:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e90ed44584e2c265ab9087383083d6e6bbd8223752ec800b57d09eb7a4d9db3/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Jan 26 16:19:02 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d.
Jan 26 16:19:02 compute-0 podman[97684]: 2026-01-26 16:19:02.516323968 +0000 UTC m=+0.143159753 container init 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 26 16:19:02 compute-0 ovn_controller[97699]: + sudo -E kolla_set_configs
Jan 26 16:19:02 compute-0 podman[97684]: 2026-01-26 16:19:02.553658188 +0000 UTC m=+0.180493963 container start 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 26 16:19:02 compute-0 systemd[1]: Created slice User Slice of UID 0.
Jan 26 16:19:02 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Jan 26 16:19:02 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Jan 26 16:19:02 compute-0 systemd[1]: Starting User Manager for UID 0...
Jan 26 16:19:02 compute-0 systemd[97719]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Jan 26 16:19:02 compute-0 edpm-start-podman-container[97684]: ovn_controller
Jan 26 16:19:02 compute-0 edpm-start-podman-container[97683]: Creating additional drop-in dependency for "ovn_controller" (6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d)
Jan 26 16:19:02 compute-0 systemd[97719]: Queued start job for default target Main User Target.
Jan 26 16:19:02 compute-0 systemd[1]: Reloading.
Jan 26 16:19:02 compute-0 systemd[97719]: Created slice User Application Slice.
Jan 26 16:19:02 compute-0 systemd[97719]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Jan 26 16:19:02 compute-0 systemd[97719]: Started Daily Cleanup of User's Temporary Directories.
Jan 26 16:19:02 compute-0 systemd[97719]: Reached target Paths.
Jan 26 16:19:02 compute-0 systemd[97719]: Reached target Timers.
Jan 26 16:19:02 compute-0 podman[97705]: 2026-01-26 16:19:02.754755287 +0000 UTC m=+0.191231474 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:19:02 compute-0 systemd[97719]: Starting D-Bus User Message Bus Socket...
Jan 26 16:19:02 compute-0 systemd[97719]: Starting Create User's Volatile Files and Directories...
Jan 26 16:19:02 compute-0 systemd[97719]: Finished Create User's Volatile Files and Directories.
Jan 26 16:19:02 compute-0 systemd[97719]: Listening on D-Bus User Message Bus Socket.
Jan 26 16:19:02 compute-0 systemd[97719]: Reached target Sockets.
Jan 26 16:19:02 compute-0 systemd[97719]: Reached target Basic System.
Jan 26 16:19:02 compute-0 systemd[97719]: Reached target Main User Target.
Jan 26 16:19:02 compute-0 systemd[97719]: Startup finished in 184ms.
Jan 26 16:19:02 compute-0 systemd-rc-local-generator[97790]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 16:19:02 compute-0 systemd-sysv-generator[97796]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 16:19:02 compute-0 systemd[1]: Started User Manager for UID 0.
Jan 26 16:19:02 compute-0 systemd[1]: Started ovn_controller container.
Jan 26 16:19:02 compute-0 systemd[1]: 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d-38ac5ae540beb07d.service: Main process exited, code=exited, status=1/FAILURE
Jan 26 16:19:02 compute-0 systemd[1]: 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d-38ac5ae540beb07d.service: Failed with result 'exit-code'.
Jan 26 16:19:03 compute-0 systemd[1]: Started Session c1 of User root.
Jan 26 16:19:03 compute-0 sudo[97641]: pam_unix(sudo:session): session closed for user root
Jan 26 16:19:03 compute-0 ovn_controller[97699]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 26 16:19:03 compute-0 ovn_controller[97699]: INFO:__main__:Validating config file
Jan 26 16:19:03 compute-0 ovn_controller[97699]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 26 16:19:03 compute-0 ovn_controller[97699]: INFO:__main__:Writing out command to execute
Jan 26 16:19:03 compute-0 systemd[1]: session-c1.scope: Deactivated successfully.
Jan 26 16:19:03 compute-0 ovn_controller[97699]: ++ cat /run_command
Jan 26 16:19:03 compute-0 ovn_controller[97699]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 26 16:19:03 compute-0 ovn_controller[97699]: + ARGS=
Jan 26 16:19:03 compute-0 ovn_controller[97699]: + sudo kolla_copy_cacerts
Jan 26 16:19:03 compute-0 systemd[1]: Started Session c2 of User root.
Jan 26 16:19:03 compute-0 ovn_controller[97699]: + [[ ! -n '' ]]
Jan 26 16:19:03 compute-0 systemd[1]: session-c2.scope: Deactivated successfully.
Jan 26 16:19:03 compute-0 ovn_controller[97699]: + . kolla_extend_start
Jan 26 16:19:03 compute-0 ovn_controller[97699]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 26 16:19:03 compute-0 ovn_controller[97699]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Jan 26 16:19:03 compute-0 ovn_controller[97699]: + umask 0022
Jan 26 16:19:03 compute-0 ovn_controller[97699]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Jan 26 16:19:03 compute-0 ovn_controller[97699]: 2026-01-26T16:19:03Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 26 16:19:03 compute-0 ovn_controller[97699]: 2026-01-26T16:19:03Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 26 16:19:03 compute-0 ovn_controller[97699]: 2026-01-26T16:19:03Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Jan 26 16:19:03 compute-0 ovn_controller[97699]: 2026-01-26T16:19:03Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Jan 26 16:19:03 compute-0 ovn_controller[97699]: 2026-01-26T16:19:03Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 26 16:19:03 compute-0 ovn_controller[97699]: 2026-01-26T16:19:03Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Jan 26 16:19:03 compute-0 NetworkManager[56253]: <info>  [1769444343.1787] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Jan 26 16:19:03 compute-0 NetworkManager[56253]: <info>  [1769444343.1795] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 26 16:19:03 compute-0 NetworkManager[56253]: <warn>  [1769444343.1800] device (br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 26 16:19:03 compute-0 NetworkManager[56253]: <info>  [1769444343.1805] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/15)
Jan 26 16:19:03 compute-0 NetworkManager[56253]: <info>  [1769444343.1810] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/16)
Jan 26 16:19:03 compute-0 NetworkManager[56253]: <info>  [1769444343.1815] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 26 16:19:03 compute-0 kernel: br-int: entered promiscuous mode
Jan 26 16:19:03 compute-0 ovn_controller[97699]: 2026-01-26T16:19:03Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 26 16:19:03 compute-0 ovn_controller[97699]: 2026-01-26T16:19:03Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 26 16:19:03 compute-0 ovn_controller[97699]: 2026-01-26T16:19:03Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 26 16:19:03 compute-0 ovn_controller[97699]: 2026-01-26T16:19:03Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Jan 26 16:19:03 compute-0 ovn_controller[97699]: 2026-01-26T16:19:03Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Jan 26 16:19:03 compute-0 ovn_controller[97699]: 2026-01-26T16:19:03Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Jan 26 16:19:03 compute-0 ovn_controller[97699]: 2026-01-26T16:19:03Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 26 16:19:03 compute-0 ovn_controller[97699]: 2026-01-26T16:19:03Z|00014|main|INFO|OVS feature set changed, force recompute.
Jan 26 16:19:03 compute-0 ovn_controller[97699]: 2026-01-26T16:19:03Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 26 16:19:03 compute-0 ovn_controller[97699]: 2026-01-26T16:19:03Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 26 16:19:03 compute-0 ovn_controller[97699]: 2026-01-26T16:19:03Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 26 16:19:03 compute-0 ovn_controller[97699]: 2026-01-26T16:19:03Z|00018|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 26 16:19:03 compute-0 ovn_controller[97699]: 2026-01-26T16:19:03Z|00019|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 26 16:19:03 compute-0 ovn_controller[97699]: 2026-01-26T16:19:03Z|00020|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Jan 26 16:19:03 compute-0 ovn_controller[97699]: 2026-01-26T16:19:03Z|00021|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Jan 26 16:19:03 compute-0 ovn_controller[97699]: 2026-01-26T16:19:03Z|00022|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Jan 26 16:19:03 compute-0 ovn_controller[97699]: 2026-01-26T16:19:03Z|00023|main|INFO|OVS feature set changed, force recompute.
Jan 26 16:19:03 compute-0 ovn_controller[97699]: 2026-01-26T16:19:03Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Jan 26 16:19:03 compute-0 ovn_controller[97699]: 2026-01-26T16:19:03Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 26 16:19:03 compute-0 ovn_controller[97699]: 2026-01-26T16:19:03Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 26 16:19:03 compute-0 ovn_controller[97699]: 2026-01-26T16:19:03Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 26 16:19:03 compute-0 ovn_controller[97699]: 2026-01-26T16:19:03Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 26 16:19:03 compute-0 ovn_controller[97699]: 2026-01-26T16:19:03Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 26 16:19:03 compute-0 ovn_controller[97699]: 2026-01-26T16:19:03Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 26 16:19:03 compute-0 NetworkManager[56253]: <info>  [1769444343.2063] manager: (ovn-63d74f-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Jan 26 16:19:03 compute-0 kernel: genev_sys_6081: entered promiscuous mode
Jan 26 16:19:03 compute-0 NetworkManager[56253]: <info>  [1769444343.2262] device (genev_sys_6081): carrier: link connected
Jan 26 16:19:03 compute-0 NetworkManager[56253]: <info>  [1769444343.2265] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/18)
Jan 26 16:19:03 compute-0 systemd-udevd[97837]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:19:03 compute-0 systemd-udevd[97841]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:19:04 compute-0 python3.9[97969]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 26 16:19:04 compute-0 sudo[98119]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdlbswubrefbgaabqcovdqutrwmcfnam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444344.4317682-619-77568828963060/AnsiballZ_stat.py'
Jan 26 16:19:04 compute-0 sudo[98119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:19:04 compute-0 python3.9[98121]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:19:04 compute-0 sudo[98119]: pam_unix(sudo:session): session closed for user root
Jan 26 16:19:05 compute-0 sudo[98242]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zcdcvgkdfqxetjnsmxpzwjyhmdgarrlf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444344.4317682-619-77568828963060/AnsiballZ_copy.py'
Jan 26 16:19:05 compute-0 sudo[98242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:19:05 compute-0 python3.9[98244]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769444344.4317682-619-77568828963060/.source.yaml _original_basename=.embbe6w9 follow=False checksum=618dda1ed790a631fa9ee86c2cc53e7a4e99e078 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:19:05 compute-0 sudo[98242]: pam_unix(sudo:session): session closed for user root
Jan 26 16:19:05 compute-0 sudo[98394]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpbzowzpqxmzasxrynznsotqykcegbdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444345.6327603-634-139169108457220/AnsiballZ_command.py'
Jan 26 16:19:05 compute-0 sudo[98394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:19:06 compute-0 python3.9[98396]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 16:19:06 compute-0 ovs-vsctl[98397]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Jan 26 16:19:06 compute-0 sudo[98394]: pam_unix(sudo:session): session closed for user root
Jan 26 16:19:06 compute-0 sudo[98547]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bizjknivjnkgkujrjemjldarnznzhgwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444346.3304644-642-97386629287834/AnsiballZ_command.py'
Jan 26 16:19:06 compute-0 sudo[98547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:19:06 compute-0 python3.9[98549]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 16:19:06 compute-0 ovs-vsctl[98551]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Jan 26 16:19:06 compute-0 sudo[98547]: pam_unix(sudo:session): session closed for user root
Jan 26 16:19:07 compute-0 sudo[98702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhrquxqzpbeoeaknnsowqlaqpeeisgic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444347.1638882-656-143537622925353/AnsiballZ_command.py'
Jan 26 16:19:07 compute-0 sudo[98702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:19:07 compute-0 python3.9[98704]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 16:19:07 compute-0 ovs-vsctl[98705]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Jan 26 16:19:07 compute-0 sudo[98702]: pam_unix(sudo:session): session closed for user root
Jan 26 16:19:08 compute-0 sshd-session[87219]: Connection closed by 192.168.122.30 port 58160
Jan 26 16:19:08 compute-0 sshd-session[87216]: pam_unix(sshd:session): session closed for user zuul
Jan 26 16:19:08 compute-0 systemd[1]: session-20.scope: Deactivated successfully.
Jan 26 16:19:08 compute-0 systemd[1]: session-20.scope: Consumed 46.729s CPU time.
Jan 26 16:19:08 compute-0 systemd-logind[788]: Session 20 logged out. Waiting for processes to exit.
Jan 26 16:19:08 compute-0 systemd-logind[788]: Removed session 20.
Jan 26 16:19:13 compute-0 sshd-session[98730]: Accepted publickey for zuul from 192.168.122.30 port 46282 ssh2: ECDSA SHA256:e2nTWydmUlQnW5BYhfFSR+TRarHnUqn0luI8Mjiyqhk
Jan 26 16:19:13 compute-0 systemd-logind[788]: New session 22 of user zuul.
Jan 26 16:19:13 compute-0 systemd[1]: Started Session 22 of User zuul.
Jan 26 16:19:13 compute-0 sshd-session[98730]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 26 16:19:13 compute-0 systemd[1]: Stopping User Manager for UID 0...
Jan 26 16:19:13 compute-0 systemd[97719]: Activating special unit Exit the Session...
Jan 26 16:19:13 compute-0 systemd[97719]: Stopped target Main User Target.
Jan 26 16:19:13 compute-0 systemd[97719]: Stopped target Basic System.
Jan 26 16:19:13 compute-0 systemd[97719]: Stopped target Paths.
Jan 26 16:19:13 compute-0 systemd[97719]: Stopped target Sockets.
Jan 26 16:19:13 compute-0 systemd[97719]: Stopped target Timers.
Jan 26 16:19:13 compute-0 systemd[97719]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 26 16:19:13 compute-0 systemd[97719]: Closed D-Bus User Message Bus Socket.
Jan 26 16:19:13 compute-0 systemd[97719]: Stopped Create User's Volatile Files and Directories.
Jan 26 16:19:13 compute-0 systemd[97719]: Removed slice User Application Slice.
Jan 26 16:19:13 compute-0 systemd[97719]: Reached target Shutdown.
Jan 26 16:19:13 compute-0 systemd[97719]: Finished Exit the Session.
Jan 26 16:19:13 compute-0 systemd[97719]: Reached target Exit the Session.
Jan 26 16:19:13 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Jan 26 16:19:13 compute-0 systemd[1]: Stopped User Manager for UID 0.
Jan 26 16:19:13 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Jan 26 16:19:13 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Jan 26 16:19:13 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Jan 26 16:19:13 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Jan 26 16:19:13 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Jan 26 16:19:14 compute-0 python3.9[98887]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 16:19:15 compute-0 sudo[99041]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jeamnwuklmlbsqrwmnnshnjtslnvmhht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444354.6414838-29-68173159886041/AnsiballZ_file.py'
Jan 26 16:19:15 compute-0 sudo[99041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:19:15 compute-0 python3.9[99043]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/openstack/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:19:15 compute-0 sudo[99041]: pam_unix(sudo:session): session closed for user root
Jan 26 16:19:15 compute-0 sudo[99193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elxqvkniuiotacaobhqyflvmsocwtncp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444355.502517-29-70557998760775/AnsiballZ_file.py'
Jan 26 16:19:15 compute-0 sudo[99193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:19:15 compute-0 python3.9[99195]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:19:15 compute-0 sudo[99193]: pam_unix(sudo:session): session closed for user root
Jan 26 16:19:16 compute-0 sudo[99345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knpbespnozhldwzedhtuadugvdzatoju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444356.106271-29-220918900714974/AnsiballZ_file.py'
Jan 26 16:19:16 compute-0 sudo[99345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:19:16 compute-0 python3.9[99347]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:19:16 compute-0 sudo[99345]: pam_unix(sudo:session): session closed for user root
Jan 26 16:19:17 compute-0 sudo[99497]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltfxatqxqzpckgahoxysswreisxeenhv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444356.723669-29-149065491522570/AnsiballZ_file.py'
Jan 26 16:19:17 compute-0 sudo[99497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:19:17 compute-0 python3.9[99499]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:19:17 compute-0 sudo[99497]: pam_unix(sudo:session): session closed for user root
Jan 26 16:19:17 compute-0 sudo[99649]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxfnzabjxzboobnbejibqsjvpmtxxcet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444357.3690517-29-97133400926918/AnsiballZ_file.py'
Jan 26 16:19:17 compute-0 sudo[99649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:19:17 compute-0 python3.9[99651]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:19:17 compute-0 sudo[99649]: pam_unix(sudo:session): session closed for user root
Jan 26 16:19:18 compute-0 python3.9[99801]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 16:19:19 compute-0 sudo[99951]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkzqwkbncusamzoajbjndkohsdqjlmxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444358.695018-73-249047143111031/AnsiballZ_seboolean.py'
Jan 26 16:19:19 compute-0 sudo[99951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:19:19 compute-0 python3.9[99953]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 26 16:19:19 compute-0 sudo[99951]: pam_unix(sudo:session): session closed for user root
Jan 26 16:19:20 compute-0 python3.9[100103]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:19:21 compute-0 python3.9[100225]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769444360.1800709-81-246710799644374/.source follow=False _original_basename=haproxy.j2 checksum=a5072e7b19ca96a1f495d94f97f31903737cfd27 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:19:22 compute-0 python3.9[100375]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:19:22 compute-0 python3.9[100496]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769444361.6941962-96-206755829586049/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:19:23 compute-0 sudo[100646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-huagccirqeqdngncfxngdlzneraldnyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444363.0685043-113-151622529812524/AnsiballZ_setup.py'
Jan 26 16:19:23 compute-0 sudo[100646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:19:23 compute-0 python3.9[100648]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 26 16:19:23 compute-0 sudo[100646]: pam_unix(sudo:session): session closed for user root
Jan 26 16:19:24 compute-0 sudo[100730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqznfpsaodnqaoyhstfkmriacckdsfeb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444363.0685043-113-151622529812524/AnsiballZ_dnf.py'
Jan 26 16:19:24 compute-0 sudo[100730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:19:24 compute-0 python3.9[100732]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 26 16:19:25 compute-0 sudo[100730]: pam_unix(sudo:session): session closed for user root
Jan 26 16:19:26 compute-0 sudo[100883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jofbyoiaunboxejthiwxwpjwofeyncpm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444366.033096-125-240340459476732/AnsiballZ_systemd.py'
Jan 26 16:19:26 compute-0 sudo[100883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:19:26 compute-0 python3.9[100885]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 26 16:19:27 compute-0 sudo[100883]: pam_unix(sudo:session): session closed for user root
Jan 26 16:19:27 compute-0 python3.9[101038]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:19:28 compute-0 python3.9[101159]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769444367.285842-133-58846208972162/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:19:28 compute-0 python3.9[101309]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:19:29 compute-0 python3.9[101430]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769444368.39143-133-61114736676249/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:19:30 compute-0 python3.9[101580]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:19:31 compute-0 python3.9[101701]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769444370.0998847-177-107858740926992/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:19:31 compute-0 python3.9[101851]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:19:32 compute-0 python3.9[101972]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769444371.3002598-177-120490525474892/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:19:32 compute-0 python3.9[102122]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 16:19:33 compute-0 ovn_controller[97699]: 2026-01-26T16:19:33Z|00025|memory|INFO|16000 kB peak resident set size after 30.0 seconds
Jan 26 16:19:33 compute-0 ovn_controller[97699]: 2026-01-26T16:19:33Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:471 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Jan 26 16:19:33 compute-0 podman[102130]: 2026-01-26 16:19:33.242685126 +0000 UTC m=+0.122328580 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 26 16:19:33 compute-0 sudo[102300]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqplhpowsujtmlflzwjsksoxexaoteim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444373.343335-215-225576464926925/AnsiballZ_file.py'
Jan 26 16:19:33 compute-0 sudo[102300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:19:33 compute-0 python3.9[102302]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:19:33 compute-0 sudo[102300]: pam_unix(sudo:session): session closed for user root
Jan 26 16:19:34 compute-0 sudo[102452]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdmmmdzvmaduddwcygcbndbcyberxept ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444374.0336869-223-234911013361088/AnsiballZ_stat.py'
Jan 26 16:19:34 compute-0 sudo[102452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:19:34 compute-0 python3.9[102454]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:19:34 compute-0 sudo[102452]: pam_unix(sudo:session): session closed for user root
Jan 26 16:19:34 compute-0 sudo[102530]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lknhldlzscaoqvlseqjuzdfdtmuhpjll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444374.0336869-223-234911013361088/AnsiballZ_file.py'
Jan 26 16:19:34 compute-0 sudo[102530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:19:34 compute-0 python3.9[102532]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:19:35 compute-0 sudo[102530]: pam_unix(sudo:session): session closed for user root
Jan 26 16:19:35 compute-0 sudo[102682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpfozuyjqjiownreervjgxourkpaopwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444375.1427011-223-159174018557318/AnsiballZ_stat.py'
Jan 26 16:19:35 compute-0 sudo[102682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:19:35 compute-0 python3.9[102684]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:19:35 compute-0 sudo[102682]: pam_unix(sudo:session): session closed for user root
Jan 26 16:19:36 compute-0 sudo[102760]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhpazvlpxvgzggxzbamrbrkztbpyfrjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444375.1427011-223-159174018557318/AnsiballZ_file.py'
Jan 26 16:19:36 compute-0 sudo[102760]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:19:36 compute-0 python3.9[102762]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:19:36 compute-0 sudo[102760]: pam_unix(sudo:session): session closed for user root
Jan 26 16:19:36 compute-0 sudo[102912]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmrzxrcstzbvzyjrmbtcexlkbogofuke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444376.4760098-246-64979244853794/AnsiballZ_file.py'
Jan 26 16:19:36 compute-0 sudo[102912]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:19:36 compute-0 python3.9[102914]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:19:37 compute-0 sudo[102912]: pam_unix(sudo:session): session closed for user root
Jan 26 16:19:37 compute-0 sudo[103064]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjhxmpsjxhwtphmzoafujgpqpqbcavwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444377.1827195-254-2245508221968/AnsiballZ_stat.py'
Jan 26 16:19:37 compute-0 sudo[103064]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:19:37 compute-0 python3.9[103066]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:19:37 compute-0 sudo[103064]: pam_unix(sudo:session): session closed for user root
Jan 26 16:19:38 compute-0 sudo[103142]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aealikhrqmmwqqyfyejjfysxqfyiwins ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444377.1827195-254-2245508221968/AnsiballZ_file.py'
Jan 26 16:19:38 compute-0 sudo[103142]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:19:38 compute-0 python3.9[103144]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:19:38 compute-0 sudo[103142]: pam_unix(sudo:session): session closed for user root
Jan 26 16:19:39 compute-0 sudo[103294]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqrinsnvsdvblsqfqxkmtlqwzvvptpow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444378.5495238-266-148491054204967/AnsiballZ_stat.py'
Jan 26 16:19:39 compute-0 sudo[103294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:19:39 compute-0 python3.9[103296]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:19:39 compute-0 sudo[103294]: pam_unix(sudo:session): session closed for user root
Jan 26 16:19:39 compute-0 sudo[103372]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmqyfzhqzlcoumqpfcprrlrijsmynzgw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444378.5495238-266-148491054204967/AnsiballZ_file.py'
Jan 26 16:19:39 compute-0 sudo[103372]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:19:39 compute-0 python3.9[103374]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:19:39 compute-0 sudo[103372]: pam_unix(sudo:session): session closed for user root
Jan 26 16:19:40 compute-0 sudo[103525]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnmpnqzsmpnqphreyubhyxkxktdmmdwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444379.88371-278-109984614419187/AnsiballZ_systemd.py'
Jan 26 16:19:40 compute-0 sudo[103525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:19:40 compute-0 python3.9[103527]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 16:19:40 compute-0 systemd[1]: Reloading.
Jan 26 16:19:40 compute-0 systemd-rc-local-generator[103555]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 16:19:40 compute-0 systemd-sysv-generator[103559]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 16:19:40 compute-0 sudo[103525]: pam_unix(sudo:session): session closed for user root
Jan 26 16:19:41 compute-0 sudo[103714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nffraknkqjatxaaszlpvlubuynzrligh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444381.0248363-286-184340849544081/AnsiballZ_stat.py'
Jan 26 16:19:41 compute-0 sudo[103714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:19:41 compute-0 python3.9[103716]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:19:41 compute-0 sudo[103714]: pam_unix(sudo:session): session closed for user root
Jan 26 16:19:41 compute-0 sudo[103792]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlbtkzdcwgkekitjtmltgtgorsmsijoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444381.0248363-286-184340849544081/AnsiballZ_file.py'
Jan 26 16:19:41 compute-0 sudo[103792]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:19:41 compute-0 python3.9[103794]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:19:42 compute-0 sudo[103792]: pam_unix(sudo:session): session closed for user root
Jan 26 16:19:42 compute-0 sudo[103944]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yguhhnvrcepgqwpjxktrnzhqycunbjpb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444382.3666542-298-202707071307129/AnsiballZ_stat.py'
Jan 26 16:19:42 compute-0 sudo[103944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:19:42 compute-0 python3.9[103946]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:19:42 compute-0 sudo[103944]: pam_unix(sudo:session): session closed for user root
Jan 26 16:19:43 compute-0 sudo[104022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-buncunaxuyxmlmkvbsrjdbdinumaqmre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444382.3666542-298-202707071307129/AnsiballZ_file.py'
Jan 26 16:19:43 compute-0 sudo[104022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:19:43 compute-0 python3.9[104024]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:19:43 compute-0 sudo[104022]: pam_unix(sudo:session): session closed for user root
Jan 26 16:19:43 compute-0 sudo[104174]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmwkyzttjjuvejoizbovwfscqryijqfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444383.4994395-310-204806860584840/AnsiballZ_systemd.py'
Jan 26 16:19:43 compute-0 sudo[104174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:19:44 compute-0 python3.9[104176]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 16:19:44 compute-0 systemd[1]: Reloading.
Jan 26 16:19:44 compute-0 systemd-rc-local-generator[104203]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 16:19:44 compute-0 systemd-sysv-generator[104207]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 16:19:44 compute-0 systemd[1]: Starting Create netns directory...
Jan 26 16:19:44 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 26 16:19:44 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 26 16:19:44 compute-0 systemd[1]: Finished Create netns directory.
Jan 26 16:19:44 compute-0 sudo[104174]: pam_unix(sudo:session): session closed for user root
Jan 26 16:19:45 compute-0 sudo[104368]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esvukjrhxwnrxuzakmdjyqiylukthxgm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444384.6973486-320-240925823779492/AnsiballZ_file.py'
Jan 26 16:19:45 compute-0 sudo[104368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:19:45 compute-0 python3.9[104370]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:19:45 compute-0 sudo[104368]: pam_unix(sudo:session): session closed for user root
Jan 26 16:19:45 compute-0 sudo[104520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kftettytqafweeogvlkprpwmhinnzwkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444385.3786983-328-210233524430817/AnsiballZ_stat.py'
Jan 26 16:19:45 compute-0 sudo[104520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:19:46 compute-0 python3.9[104522]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:19:46 compute-0 sudo[104520]: pam_unix(sudo:session): session closed for user root
Jan 26 16:19:46 compute-0 sudo[104643]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mayjtdlitompvbuzewlbpafyntdawcel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444385.3786983-328-210233524430817/AnsiballZ_copy.py'
Jan 26 16:19:46 compute-0 sudo[104643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:19:46 compute-0 python3.9[104645]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769444385.3786983-328-210233524430817/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:19:46 compute-0 sudo[104643]: pam_unix(sudo:session): session closed for user root
Jan 26 16:19:47 compute-0 sudo[104795]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aiacoazbsmpaoumlkyawcikkqyvchacj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444386.9372087-345-12336956519614/AnsiballZ_file.py'
Jan 26 16:19:47 compute-0 sudo[104795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:19:47 compute-0 python3.9[104797]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:19:47 compute-0 sudo[104795]: pam_unix(sudo:session): session closed for user root
Jan 26 16:19:47 compute-0 sudo[104947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfocosavrimbgbokxmsfjaizcvscteyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444387.625133-353-4092525901908/AnsiballZ_file.py'
Jan 26 16:19:47 compute-0 sudo[104947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:19:48 compute-0 python3.9[104949]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:19:48 compute-0 sudo[104947]: pam_unix(sudo:session): session closed for user root
Jan 26 16:19:48 compute-0 sudo[105099]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjrmhamtugtydmxuflouzuwncpbyksxr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444388.3135502-361-204620228842263/AnsiballZ_stat.py'
Jan 26 16:19:48 compute-0 sudo[105099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:19:48 compute-0 python3.9[105101]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:19:48 compute-0 sudo[105099]: pam_unix(sudo:session): session closed for user root
Jan 26 16:19:49 compute-0 sudo[105222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbtgfznvzbcbulrgbqryyzjzahnspmje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444388.3135502-361-204620228842263/AnsiballZ_copy.py'
Jan 26 16:19:49 compute-0 sudo[105222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:19:49 compute-0 python3.9[105224]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769444388.3135502-361-204620228842263/.source.json _original_basename=.w9m6y_zd follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:19:49 compute-0 sudo[105222]: pam_unix(sudo:session): session closed for user root
Jan 26 16:19:50 compute-0 python3.9[105374]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:19:52 compute-0 sudo[105795]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtwfatqhlnicavmyyzzhxcdzppbukrcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444391.6640632-401-23477760484698/AnsiballZ_container_config_data.py'
Jan 26 16:19:52 compute-0 sudo[105795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:19:52 compute-0 python3.9[105797]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Jan 26 16:19:52 compute-0 sudo[105795]: pam_unix(sudo:session): session closed for user root
Jan 26 16:19:53 compute-0 sudo[105947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjzdctbglxjjjbcfquuwdrszhzrhjjac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444392.6453729-412-194032812112865/AnsiballZ_container_config_hash.py'
Jan 26 16:19:53 compute-0 sudo[105947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:19:53 compute-0 python3.9[105949]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 26 16:19:53 compute-0 sudo[105947]: pam_unix(sudo:session): session closed for user root
Jan 26 16:19:54 compute-0 sudo[106099]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ecmkuambuqrscdgljzkzmqjpfonoiscs ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769444393.935646-422-36060955294072/AnsiballZ_edpm_container_manage.py'
Jan 26 16:19:54 compute-0 sudo[106099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:19:54 compute-0 python3[106101]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json containers=['ovn_metadata_agent'] log_base_path=/var/log/containers/stdouts debug=False
Jan 26 16:19:54 compute-0 podman[106136]: 2026-01-26 16:19:54.950496321 +0000 UTC m=+0.065401488 container create 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Jan 26 16:19:54 compute-0 podman[106136]: 2026-01-26 16:19:54.925828197 +0000 UTC m=+0.040733404 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 16:19:54 compute-0 python3[106101]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 16:19:55 compute-0 sudo[106099]: pam_unix(sudo:session): session closed for user root
Jan 26 16:19:55 compute-0 sudo[106324]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qthljnyyhwimjytbulokumcdftbauvoe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444395.2390397-430-134683447271952/AnsiballZ_stat.py'
Jan 26 16:19:55 compute-0 sudo[106324]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:19:55 compute-0 python3.9[106326]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 16:19:55 compute-0 sudo[106324]: pam_unix(sudo:session): session closed for user root
Jan 26 16:19:56 compute-0 sudo[106478]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhnjfjbdesiarhtudnlbtqgmbmrmzpvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444396.0245504-439-3940895619099/AnsiballZ_file.py'
Jan 26 16:19:56 compute-0 sudo[106478]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:19:56 compute-0 python3.9[106480]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:19:56 compute-0 sudo[106478]: pam_unix(sudo:session): session closed for user root
Jan 26 16:19:56 compute-0 sudo[106554]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjscncjnmbixboobapkfgffoagflvylp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444396.0245504-439-3940895619099/AnsiballZ_stat.py'
Jan 26 16:19:56 compute-0 sudo[106554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:19:56 compute-0 python3.9[106556]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 16:19:56 compute-0 sudo[106554]: pam_unix(sudo:session): session closed for user root
Jan 26 16:19:57 compute-0 sudo[106705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixpwzqmsrucgolpaxjutnqqpssvptyzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444397.0019267-439-101659015272505/AnsiballZ_copy.py'
Jan 26 16:19:57 compute-0 sudo[106705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:19:57 compute-0 python3.9[106707]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769444397.0019267-439-101659015272505/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:19:57 compute-0 sudo[106705]: pam_unix(sudo:session): session closed for user root
Jan 26 16:19:58 compute-0 sudo[106781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmqkrlntwqzseqgkryzewsrpmoypkbye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444397.0019267-439-101659015272505/AnsiballZ_systemd.py'
Jan 26 16:19:58 compute-0 sudo[106781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:19:58 compute-0 python3.9[106783]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 26 16:19:58 compute-0 systemd[1]: Reloading.
Jan 26 16:19:58 compute-0 systemd-rc-local-generator[106806]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 16:19:58 compute-0 systemd-sysv-generator[106812]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 16:19:58 compute-0 sudo[106781]: pam_unix(sudo:session): session closed for user root
Jan 26 16:19:58 compute-0 sudo[106891]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdpcacjnfkdcfwcdrwxfuvdwqrafbqfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444397.0019267-439-101659015272505/AnsiballZ_systemd.py'
Jan 26 16:19:58 compute-0 sudo[106891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:19:59 compute-0 python3.9[106893]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 16:19:59 compute-0 systemd[1]: Reloading.
Jan 26 16:19:59 compute-0 systemd-rc-local-generator[106923]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 16:19:59 compute-0 systemd-sysv-generator[106926]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 16:19:59 compute-0 systemd[1]: Starting ovn_metadata_agent container...
Jan 26 16:19:59 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:19:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/194cdbab03a0ceeb01817d18fe24a196ae0e760498ea7ac11dac6594867bd24f/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Jan 26 16:19:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/194cdbab03a0ceeb01817d18fe24a196ae0e760498ea7ac11dac6594867bd24f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 16:19:59 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6.
Jan 26 16:19:59 compute-0 podman[106934]: 2026-01-26 16:19:59.607797842 +0000 UTC m=+0.119478754 container init 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 26 16:19:59 compute-0 ovn_metadata_agent[106950]: + sudo -E kolla_set_configs
Jan 26 16:19:59 compute-0 podman[106934]: 2026-01-26 16:19:59.634857551 +0000 UTC m=+0.146538463 container start 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:19:59 compute-0 edpm-start-podman-container[106934]: ovn_metadata_agent
Jan 26 16:19:59 compute-0 ovn_metadata_agent[106950]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 26 16:19:59 compute-0 ovn_metadata_agent[106950]: INFO:__main__:Validating config file
Jan 26 16:19:59 compute-0 ovn_metadata_agent[106950]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 26 16:19:59 compute-0 ovn_metadata_agent[106950]: INFO:__main__:Copying service configuration files
Jan 26 16:19:59 compute-0 ovn_metadata_agent[106950]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Jan 26 16:19:59 compute-0 ovn_metadata_agent[106950]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Jan 26 16:19:59 compute-0 ovn_metadata_agent[106950]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Jan 26 16:19:59 compute-0 ovn_metadata_agent[106950]: INFO:__main__:Writing out command to execute
Jan 26 16:19:59 compute-0 ovn_metadata_agent[106950]: INFO:__main__:Setting permission for /var/lib/neutron
Jan 26 16:19:59 compute-0 ovn_metadata_agent[106950]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Jan 26 16:19:59 compute-0 ovn_metadata_agent[106950]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Jan 26 16:19:59 compute-0 ovn_metadata_agent[106950]: INFO:__main__:Setting permission for /var/lib/neutron/external
Jan 26 16:19:59 compute-0 ovn_metadata_agent[106950]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Jan 26 16:19:59 compute-0 ovn_metadata_agent[106950]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Jan 26 16:19:59 compute-0 ovn_metadata_agent[106950]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Jan 26 16:19:59 compute-0 ovn_metadata_agent[106950]: ++ cat /run_command
Jan 26 16:19:59 compute-0 ovn_metadata_agent[106950]: + CMD=neutron-ovn-metadata-agent
Jan 26 16:19:59 compute-0 ovn_metadata_agent[106950]: + ARGS=
Jan 26 16:19:59 compute-0 ovn_metadata_agent[106950]: + sudo kolla_copy_cacerts
Jan 26 16:19:59 compute-0 edpm-start-podman-container[106933]: Creating additional drop-in dependency for "ovn_metadata_agent" (881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6)
Jan 26 16:19:59 compute-0 podman[106957]: 2026-01-26 16:19:59.707731301 +0000 UTC m=+0.060232456 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 26 16:19:59 compute-0 systemd[1]: Reloading.
Jan 26 16:19:59 compute-0 ovn_metadata_agent[106950]: + [[ ! -n '' ]]
Jan 26 16:19:59 compute-0 ovn_metadata_agent[106950]: + . kolla_extend_start
Jan 26 16:19:59 compute-0 ovn_metadata_agent[106950]: Running command: 'neutron-ovn-metadata-agent'
Jan 26 16:19:59 compute-0 ovn_metadata_agent[106950]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Jan 26 16:19:59 compute-0 ovn_metadata_agent[106950]: + umask 0022
Jan 26 16:19:59 compute-0 ovn_metadata_agent[106950]: + exec neutron-ovn-metadata-agent
Jan 26 16:19:59 compute-0 systemd-sysv-generator[107026]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 16:19:59 compute-0 systemd-rc-local-generator[107022]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 16:19:59 compute-0 systemd[1]: Started ovn_metadata_agent container.
Jan 26 16:19:59 compute-0 sudo[106891]: pam_unix(sudo:session): session closed for user root
Jan 26 16:20:00 compute-0 python3.9[107185]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.652 106955 INFO neutron.common.config [-] Logging enabled!
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.652 106955 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.652 106955 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.653 106955 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.653 106955 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.653 106955 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.653 106955 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.654 106955 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.654 106955 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.654 106955 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.654 106955 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.654 106955 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.654 106955 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.654 106955 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.654 106955 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.654 106955 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.655 106955 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.655 106955 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.655 106955 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.655 106955 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.655 106955 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.655 106955 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.655 106955 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 sudo[107335]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdyeyvuymprrlojditapnyrvfuzkelpe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444401.3452315-484-150824553542730/AnsiballZ_stat.py'
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.655 106955 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.656 106955 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.656 106955 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.656 106955 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.656 106955 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.656 106955 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.656 106955 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.656 106955 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.656 106955 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.657 106955 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.657 106955 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.657 106955 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.657 106955 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.657 106955 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.657 106955 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.657 106955 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.658 106955 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.658 106955 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.658 106955 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.658 106955 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.658 106955 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.658 106955 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.658 106955 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.658 106955 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.658 106955 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.658 106955 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.659 106955 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.659 106955 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.659 106955 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.659 106955 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.659 106955 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.659 106955 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 sudo[107335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.659 106955 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.659 106955 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.659 106955 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.660 106955 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.660 106955 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.660 106955 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.660 106955 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.660 106955 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.660 106955 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.660 106955 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.661 106955 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.661 106955 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.661 106955 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.661 106955 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.661 106955 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.661 106955 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.661 106955 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.661 106955 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.662 106955 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.662 106955 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.662 106955 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.662 106955 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.662 106955 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.662 106955 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.662 106955 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.662 106955 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.662 106955 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.662 106955 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.663 106955 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.663 106955 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.663 106955 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.663 106955 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.663 106955 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.663 106955 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.663 106955 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.664 106955 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.664 106955 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.664 106955 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.664 106955 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.664 106955 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.664 106955 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.665 106955 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.665 106955 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.665 106955 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.665 106955 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.665 106955 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.665 106955 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.665 106955 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.665 106955 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.666 106955 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.666 106955 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.666 106955 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.666 106955 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.666 106955 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.666 106955 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.666 106955 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.667 106955 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.667 106955 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.667 106955 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.667 106955 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.667 106955 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.667 106955 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.667 106955 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.668 106955 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.668 106955 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.668 106955 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.668 106955 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.668 106955 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.668 106955 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.668 106955 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.668 106955 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.668 106955 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.669 106955 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.669 106955 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.669 106955 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.669 106955 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.669 106955 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.669 106955 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.669 106955 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.669 106955 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.669 106955 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.670 106955 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.670 106955 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.670 106955 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.670 106955 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.670 106955 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.670 106955 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.670 106955 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.670 106955 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.670 106955 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.671 106955 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.671 106955 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.671 106955 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.671 106955 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.671 106955 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.671 106955 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.671 106955 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.671 106955 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.672 106955 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.672 106955 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.672 106955 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.672 106955 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.672 106955 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.672 106955 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.672 106955 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.673 106955 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.673 106955 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.673 106955 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.673 106955 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.673 106955 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.673 106955 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.673 106955 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.673 106955 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.673 106955 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.673 106955 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.674 106955 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.674 106955 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.674 106955 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.674 106955 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.674 106955 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.674 106955 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.674 106955 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.674 106955 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.675 106955 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.675 106955 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.675 106955 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.675 106955 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.675 106955 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.675 106955 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.675 106955 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.675 106955 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.675 106955 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.676 106955 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.676 106955 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.676 106955 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.676 106955 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.676 106955 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.676 106955 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.676 106955 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.676 106955 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.676 106955 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.677 106955 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.677 106955 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.677 106955 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.677 106955 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.677 106955 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.677 106955 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.677 106955 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.677 106955 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.677 106955 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.677 106955 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.678 106955 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.678 106955 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.678 106955 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.678 106955 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.678 106955 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.678 106955 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.678 106955 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.678 106955 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.678 106955 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.679 106955 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.679 106955 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.679 106955 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.679 106955 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.679 106955 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.679 106955 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.679 106955 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.680 106955 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.680 106955 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.680 106955 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.680 106955 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.680 106955 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.680 106955 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.680 106955 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.680 106955 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.680 106955 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.681 106955 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.681 106955 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.681 106955 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.681 106955 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.681 106955 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.681 106955 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.681 106955 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.681 106955 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.682 106955 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.682 106955 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.682 106955 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.682 106955 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.682 106955 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.682 106955 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.682 106955 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.682 106955 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.682 106955 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.683 106955 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.683 106955 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.683 106955 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.683 106955 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.683 106955 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.683 106955 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.683 106955 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.683 106955 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.684 106955 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.684 106955 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.684 106955 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.684 106955 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.684 106955 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.684 106955 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.684 106955 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.685 106955 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.685 106955 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.685 106955 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.685 106955 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.685 106955 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.685 106955 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.685 106955 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.686 106955 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.686 106955 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.686 106955 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.686 106955 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.686 106955 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.686 106955 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.686 106955 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.687 106955 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.687 106955 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.687 106955 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.687 106955 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.687 106955 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.687 106955 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.687 106955 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.688 106955 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.688 106955 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.688 106955 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.688 106955 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.688 106955 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.688 106955 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.688 106955 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.689 106955 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.689 106955 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.689 106955 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.689 106955 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.689 106955 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.689 106955 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.689 106955 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.690 106955 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.690 106955 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.690 106955 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.690 106955 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.690 106955 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.690 106955 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.700 106955 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.700 106955 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.700 106955 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.701 106955 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.701 106955 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.717 106955 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 1c72c11d-5050-47c3-89e8-912766588fb3 (UUID: 1c72c11d-5050-47c3-89e8-912766588fb3) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.752 106955 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.753 106955 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.753 106955 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.753 106955 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.757 106955 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.764 106955 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.772 106955 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '1c72c11d-5050-47c3-89e8-912766588fb3'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7faee18cfdf0>], external_ids={}, name=1c72c11d-5050-47c3-89e8-912766588fb3, nb_cfg_timestamp=1769444351207, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.773 106955 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7faee184b130>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.774 106955 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.774 106955 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.775 106955 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.775 106955 INFO oslo_service.service [-] Starting 1 workers
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.781 106955 DEBUG oslo_service.service [-] Started child 107338 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.785 107338 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-167739'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.786 106955 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpfbr_g1yx/privsep.sock']
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.816 107338 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.817 107338 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.817 107338 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.822 107338 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.832 107338 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 26 16:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:01.841 107338 INFO eventlet.wsgi.server [-] (107338) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Jan 26 16:20:01 compute-0 python3.9[107337]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:20:01 compute-0 sudo[107335]: pam_unix(sudo:session): session closed for user root
Jan 26 16:20:02 compute-0 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Jan 26 16:20:02 compute-0 sudo[107466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhjnaknynqeethngutcbhxjzmfvnvejh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444401.3452315-484-150824553542730/AnsiballZ_copy.py'
Jan 26 16:20:02 compute-0 sudo[107466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:20:02 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:02.537 106955 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 26 16:20:02 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:02.538 106955 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpfbr_g1yx/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 26 16:20:02 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:02.400 107449 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 26 16:20:02 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:02.404 107449 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 26 16:20:02 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:02.406 107449 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Jan 26 16:20:02 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:02.407 107449 INFO oslo.privsep.daemon [-] privsep daemon running as pid 107449
Jan 26 16:20:02 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:02.541 107449 DEBUG oslo.privsep.daemon [-] privsep: reply[c350b2b3-18ca-47cd-85e1-d81fdc6fb691]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:20:02 compute-0 python3.9[107468]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769444401.3452315-484-150824553542730/.source.yaml _original_basename=.guuf06pa follow=False checksum=0831a0d92a44c83a1f90feefa4150efdb361f648 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:20:02 compute-0 sudo[107466]: pam_unix(sudo:session): session closed for user root
Jan 26 16:20:03 compute-0 sshd-session[98733]: Connection closed by 192.168.122.30 port 46282
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.077 107449 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.078 107449 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.078 107449 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:20:03 compute-0 sshd-session[98730]: pam_unix(sshd:session): session closed for user zuul
Jan 26 16:20:03 compute-0 systemd-logind[788]: Session 22 logged out. Waiting for processes to exit.
Jan 26 16:20:03 compute-0 systemd[1]: session-22.scope: Deactivated successfully.
Jan 26 16:20:03 compute-0 systemd[1]: session-22.scope: Consumed 36.720s CPU time.
Jan 26 16:20:03 compute-0 systemd-logind[788]: Removed session 22.
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.684 107449 DEBUG oslo.privsep.daemon [-] privsep: reply[acaf991b-d310-43ee-b652-387771ac43c8]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.686 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=1c72c11d-5050-47c3-89e8-912766588fb3, column=external_ids, values=({'neutron:ovn-metadata-id': 'b87828e5-3f30-5d9e-a161-6a63a93ad5fd'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.696 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1c72c11d-5050-47c3-89e8-912766588fb3, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.703 106955 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.703 106955 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.703 106955 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.703 106955 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.703 106955 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.703 106955 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.704 106955 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.704 106955 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.704 106955 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.704 106955 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.704 106955 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.705 106955 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.705 106955 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.705 106955 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.705 106955 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.705 106955 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.706 106955 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.706 106955 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.706 106955 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.706 106955 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.706 106955 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.706 106955 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.706 106955 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.707 106955 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.707 106955 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.707 106955 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.707 106955 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.707 106955 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.708 106955 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.708 106955 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.708 106955 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.708 106955 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.708 106955 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.709 106955 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.709 106955 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.709 106955 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.709 106955 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.709 106955 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.710 106955 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.710 106955 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.710 106955 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.710 106955 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.710 106955 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.710 106955 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.711 106955 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.711 106955 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.711 106955 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.711 106955 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.711 106955 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.711 106955 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.711 106955 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.712 106955 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.712 106955 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.712 106955 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.712 106955 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.712 106955 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.712 106955 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.712 106955 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.713 106955 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.713 106955 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.713 106955 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.713 106955 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.713 106955 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.713 106955 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.713 106955 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.714 106955 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.714 106955 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.714 106955 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.714 106955 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.714 106955 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.714 106955 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.714 106955 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.715 106955 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.715 106955 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.715 106955 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.715 106955 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.715 106955 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.715 106955 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.716 106955 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.716 106955 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.716 106955 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.716 106955 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.716 106955 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.716 106955 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.716 106955 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.717 106955 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.717 106955 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.717 106955 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.717 106955 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.717 106955 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.717 106955 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.717 106955 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.718 106955 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.718 106955 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.718 106955 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.718 106955 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.718 106955 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.718 106955 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.719 106955 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.719 106955 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.719 106955 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.719 106955 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.719 106955 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.719 106955 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.719 106955 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.720 106955 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.720 106955 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.720 106955 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.720 106955 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.720 106955 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.721 106955 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.721 106955 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.721 106955 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.721 106955 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.721 106955 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.721 106955 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.722 106955 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.722 106955 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.722 106955 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.722 106955 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.722 106955 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.722 106955 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.723 106955 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.723 106955 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.723 106955 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.723 106955 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.723 106955 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.723 106955 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.724 106955 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.724 106955 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.724 106955 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.724 106955 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.724 106955 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.724 106955 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.725 106955 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.725 106955 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.725 106955 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.725 106955 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.725 106955 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.725 106955 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.725 106955 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.726 106955 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.726 106955 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.726 106955 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.726 106955 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.726 106955 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.726 106955 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.727 106955 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.727 106955 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.727 106955 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.727 106955 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.727 106955 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.727 106955 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.727 106955 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.728 106955 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.728 106955 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.728 106955 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.728 106955 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.728 106955 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.728 106955 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.728 106955 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.728 106955 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.729 106955 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.729 106955 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.729 106955 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.729 106955 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.729 106955 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.730 106955 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.730 106955 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.730 106955 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.730 106955 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.730 106955 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.730 106955 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.730 106955 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.731 106955 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.731 106955 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.731 106955 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.731 106955 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.731 106955 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.731 106955 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.731 106955 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.732 106955 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.732 106955 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.732 106955 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.732 106955 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.732 106955 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.732 106955 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.732 106955 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.733 106955 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.733 106955 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.733 106955 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.733 106955 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.733 106955 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.733 106955 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.733 106955 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.734 106955 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.734 106955 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.734 106955 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.734 106955 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.734 106955 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.734 106955 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.734 106955 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.734 106955 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.735 106955 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.735 106955 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.735 106955 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.735 106955 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.735 106955 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.735 106955 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.735 106955 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.736 106955 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.736 106955 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.736 106955 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.736 106955 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.736 106955 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.736 106955 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.736 106955 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.736 106955 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.737 106955 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.737 106955 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.737 106955 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.737 106955 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.737 106955 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.737 106955 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.737 106955 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.738 106955 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.738 106955 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.738 106955 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.738 106955 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.738 106955 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.738 106955 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.738 106955 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.738 106955 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.739 106955 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.739 106955 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.739 106955 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.739 106955 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.739 106955 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.739 106955 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.739 106955 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.740 106955 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.740 106955 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.740 106955 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.740 106955 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.740 106955 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.740 106955 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.740 106955 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.740 106955 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.741 106955 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.741 106955 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.741 106955 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.741 106955 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.741 106955 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.741 106955 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.741 106955 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.742 106955 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.742 106955 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.742 106955 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.742 106955 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.742 106955 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.742 106955 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.742 106955 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.742 106955 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.743 106955 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.743 106955 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.743 106955 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.743 106955 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.743 106955 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.743 106955 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.743 106955 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.744 106955 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.744 106955 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.744 106955 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.744 106955 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.744 106955 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.744 106955 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.744 106955 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.745 106955 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.745 106955 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.745 106955 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.745 106955 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.745 106955 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.745 106955 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.745 106955 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.746 106955 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.746 106955 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.746 106955 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.746 106955 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.746 106955 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.746 106955 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.746 106955 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.746 106955 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.747 106955 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.747 106955 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.747 106955 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.747 106955 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.747 106955 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.747 106955 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.747 106955 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.748 106955 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.748 106955 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.748 106955 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.748 106955 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:20:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:20:03.748 106955 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 26 16:20:04 compute-0 podman[107497]: 2026-01-26 16:20:04.21374036 +0000 UTC m=+0.094527152 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 26 16:20:08 compute-0 sshd-session[107524]: Accepted publickey for zuul from 192.168.122.30 port 33072 ssh2: ECDSA SHA256:e2nTWydmUlQnW5BYhfFSR+TRarHnUqn0luI8Mjiyqhk
Jan 26 16:20:08 compute-0 systemd-logind[788]: New session 23 of user zuul.
Jan 26 16:20:08 compute-0 systemd[1]: Started Session 23 of User zuul.
Jan 26 16:20:08 compute-0 sshd-session[107524]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 26 16:20:09 compute-0 python3.9[107677]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 16:20:11 compute-0 sudo[107831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qscxenrnalsnmmophbashzqugckfahzv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444410.4693053-29-223235396741540/AnsiballZ_command.py'
Jan 26 16:20:11 compute-0 sudo[107831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:20:11 compute-0 python3.9[107833]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 16:20:11 compute-0 sudo[107831]: pam_unix(sudo:session): session closed for user root
Jan 26 16:20:12 compute-0 sudo[107996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cggjbhvuypazpqylauujzqkrwdchegkv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444411.7187843-40-101709156557757/AnsiballZ_systemd_service.py'
Jan 26 16:20:12 compute-0 sudo[107996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:20:12 compute-0 python3.9[107998]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 26 16:20:12 compute-0 systemd[1]: Reloading.
Jan 26 16:20:12 compute-0 systemd-rc-local-generator[108025]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 16:20:12 compute-0 systemd-sysv-generator[108029]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 16:20:12 compute-0 sudo[107996]: pam_unix(sudo:session): session closed for user root
Jan 26 16:20:13 compute-0 python3.9[108183]: ansible-ansible.builtin.service_facts Invoked
Jan 26 16:20:13 compute-0 network[108200]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 26 16:20:13 compute-0 network[108201]: 'network-scripts' will be removed from distribution in near future.
Jan 26 16:20:13 compute-0 network[108202]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 26 16:20:16 compute-0 sudo[108461]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irsqdzwgkcohpgghqiakyrjhhkkqxcjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444416.6541505-59-46657779691546/AnsiballZ_systemd_service.py'
Jan 26 16:20:16 compute-0 sudo[108461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:20:17 compute-0 python3.9[108463]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 16:20:17 compute-0 sudo[108461]: pam_unix(sudo:session): session closed for user root
Jan 26 16:20:17 compute-0 sudo[108614]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezscxsxdtgplksptxntwmvkqdklesitm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444417.6017175-59-239720187406126/AnsiballZ_systemd_service.py'
Jan 26 16:20:17 compute-0 sudo[108614]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:20:18 compute-0 python3.9[108616]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 16:20:18 compute-0 sudo[108614]: pam_unix(sudo:session): session closed for user root
Jan 26 16:20:18 compute-0 sudo[108767]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nymfhzsklknnekxpcvcowjibzfnfvggw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444418.4838507-59-215907258692270/AnsiballZ_systemd_service.py'
Jan 26 16:20:18 compute-0 sudo[108767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:20:19 compute-0 python3.9[108769]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 16:20:19 compute-0 sudo[108767]: pam_unix(sudo:session): session closed for user root
Jan 26 16:20:19 compute-0 sudo[108920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qoiapnlkfnyzuvktpezinxnkoojsokxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444419.2824423-59-107558243222280/AnsiballZ_systemd_service.py'
Jan 26 16:20:19 compute-0 sudo[108920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:20:20 compute-0 python3.9[108922]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 16:20:20 compute-0 sudo[108920]: pam_unix(sudo:session): session closed for user root
Jan 26 16:20:20 compute-0 sudo[109073]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdpopoeqpyipeqlgjvciegrnmiebqxbu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444420.2030506-59-183842283856551/AnsiballZ_systemd_service.py'
Jan 26 16:20:20 compute-0 sudo[109073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:20:20 compute-0 python3.9[109075]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 16:20:20 compute-0 sudo[109073]: pam_unix(sudo:session): session closed for user root
Jan 26 16:20:21 compute-0 sudo[109226]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qywxxvonstonnrrjirlsrjxnfdmyebls ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444420.9791617-59-41444542430134/AnsiballZ_systemd_service.py'
Jan 26 16:20:21 compute-0 sudo[109226]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:20:22 compute-0 python3.9[109228]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 16:20:22 compute-0 sudo[109226]: pam_unix(sudo:session): session closed for user root
Jan 26 16:20:22 compute-0 sudo[109379]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnauxirfclgudkneprujqlpdkevspfua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444422.5566444-59-4963914410558/AnsiballZ_systemd_service.py'
Jan 26 16:20:22 compute-0 sudo[109379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:20:23 compute-0 python3.9[109381]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 16:20:23 compute-0 sudo[109379]: pam_unix(sudo:session): session closed for user root
Jan 26 16:20:24 compute-0 sudo[109532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbkulbbsjmnftmgddrspsdieenabqmqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444423.6007886-111-18438530806056/AnsiballZ_file.py'
Jan 26 16:20:24 compute-0 sudo[109532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:20:24 compute-0 python3.9[109534]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:20:24 compute-0 sudo[109532]: pam_unix(sudo:session): session closed for user root
Jan 26 16:20:24 compute-0 sudo[109684]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tuuunkoxlyfheswszhudvsospwqxenxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444424.375153-111-221298363170719/AnsiballZ_file.py'
Jan 26 16:20:24 compute-0 sudo[109684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:20:24 compute-0 python3.9[109686]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:20:24 compute-0 sudo[109684]: pam_unix(sudo:session): session closed for user root
Jan 26 16:20:25 compute-0 sudo[109837]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wacjpcxiqmdyjvyboguyszqmrhyczahu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444425.0240314-111-129874058103463/AnsiballZ_file.py'
Jan 26 16:20:25 compute-0 sudo[109837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:20:25 compute-0 python3.9[109839]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:20:25 compute-0 sudo[109837]: pam_unix(sudo:session): session closed for user root
Jan 26 16:20:25 compute-0 sudo[109989]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btpirarqhdsjlbkdtsabhhhjmpervmxr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444425.66108-111-219129713723618/AnsiballZ_file.py'
Jan 26 16:20:25 compute-0 sudo[109989]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:20:26 compute-0 python3.9[109991]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:20:26 compute-0 sudo[109989]: pam_unix(sudo:session): session closed for user root
Jan 26 16:20:26 compute-0 sudo[110141]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvbvliuqnodpsnzibkqnfnaeagftxgnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444426.2975485-111-59914135441787/AnsiballZ_file.py'
Jan 26 16:20:26 compute-0 sudo[110141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:20:26 compute-0 python3.9[110143]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:20:26 compute-0 sudo[110141]: pam_unix(sudo:session): session closed for user root
Jan 26 16:20:27 compute-0 sudo[110293]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oybryinwquiupcrxsenzzrqvisjarmuw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444426.9308317-111-125733336583233/AnsiballZ_file.py'
Jan 26 16:20:27 compute-0 sudo[110293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:20:27 compute-0 python3.9[110295]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:20:27 compute-0 sudo[110293]: pam_unix(sudo:session): session closed for user root
Jan 26 16:20:28 compute-0 sudo[110445]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nibdsmdbnklazafygnxwcgplsyyacnaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444427.7875054-111-125588182663613/AnsiballZ_file.py'
Jan 26 16:20:28 compute-0 sudo[110445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:20:28 compute-0 python3.9[110447]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:20:28 compute-0 sudo[110445]: pam_unix(sudo:session): session closed for user root
Jan 26 16:20:28 compute-0 sudo[110597]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ocjbtnvgsyszmrmthrqzwixaafmvmwmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444428.4988708-161-146636071221426/AnsiballZ_file.py'
Jan 26 16:20:28 compute-0 sudo[110597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:20:29 compute-0 python3.9[110599]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:20:29 compute-0 sudo[110597]: pam_unix(sudo:session): session closed for user root
Jan 26 16:20:29 compute-0 sudo[110749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcrukbeoewtbkjbbztjzxdjfachunjnf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444429.242871-161-258103622476712/AnsiballZ_file.py'
Jan 26 16:20:29 compute-0 sudo[110749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:20:29 compute-0 python3.9[110751]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:20:29 compute-0 sudo[110749]: pam_unix(sudo:session): session closed for user root
Jan 26 16:20:30 compute-0 podman[110851]: 2026-01-26 16:20:30.189468884 +0000 UTC m=+0.069272924 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 26 16:20:30 compute-0 sudo[110920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfabgdppfrigduoyosqmmkdwicsmnjkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444429.9011261-161-140419707284645/AnsiballZ_file.py'
Jan 26 16:20:30 compute-0 sudo[110920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:20:30 compute-0 python3.9[110922]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:20:30 compute-0 sudo[110920]: pam_unix(sudo:session): session closed for user root
Jan 26 16:20:30 compute-0 sudo[111072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znocqwbezpcsbmxjljepbqojkstwkloo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444430.654991-161-166028227862869/AnsiballZ_file.py'
Jan 26 16:20:30 compute-0 sudo[111072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:20:31 compute-0 python3.9[111074]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:20:31 compute-0 sudo[111072]: pam_unix(sudo:session): session closed for user root
Jan 26 16:20:31 compute-0 sudo[111224]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqbxxygxlplciqbpmufygzveompyewve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444431.2716115-161-48849515754346/AnsiballZ_file.py'
Jan 26 16:20:31 compute-0 sudo[111224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:20:31 compute-0 python3.9[111226]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:20:31 compute-0 sudo[111224]: pam_unix(sudo:session): session closed for user root
Jan 26 16:20:32 compute-0 sudo[111376]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrpuziiequxvpkkcyfzowdtlomqxafzt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444431.935625-161-241212624542336/AnsiballZ_file.py'
Jan 26 16:20:32 compute-0 sudo[111376]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:20:32 compute-0 python3.9[111378]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:20:32 compute-0 sudo[111376]: pam_unix(sudo:session): session closed for user root
Jan 26 16:20:32 compute-0 sudo[111528]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ralzkxqwnoynkzdinffnozrznhpjqdsg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444432.60998-161-49994741775921/AnsiballZ_file.py'
Jan 26 16:20:32 compute-0 sudo[111528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:20:33 compute-0 python3.9[111530]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:20:33 compute-0 sudo[111528]: pam_unix(sudo:session): session closed for user root
Jan 26 16:20:33 compute-0 sudo[111680]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzbeodgfucjifodtqqfvctrpqbircmcs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444433.3405056-212-100205401090101/AnsiballZ_command.py'
Jan 26 16:20:33 compute-0 sudo[111680]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:20:33 compute-0 python3.9[111682]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 16:20:33 compute-0 sudo[111680]: pam_unix(sudo:session): session closed for user root
Jan 26 16:20:34 compute-0 podman[111808]: 2026-01-26 16:20:34.540938951 +0000 UTC m=+0.080661824 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, 
org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 26 16:20:34 compute-0 python3.9[111845]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 26 16:20:35 compute-0 sudo[112011]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oyepwlelhiprerqzcyjagcopsjfbwsda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444434.9114184-230-263913050986119/AnsiballZ_systemd_service.py'
Jan 26 16:20:35 compute-0 sudo[112011]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:20:35 compute-0 python3.9[112013]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 26 16:20:35 compute-0 systemd[1]: Reloading.
Jan 26 16:20:35 compute-0 systemd-rc-local-generator[112043]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 16:20:35 compute-0 systemd-sysv-generator[112046]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 16:20:35 compute-0 sudo[112011]: pam_unix(sudo:session): session closed for user root
Jan 26 16:20:36 compute-0 sudo[112199]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txhgxiloueuilgnxljcymmrigdfqhdwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444436.0823605-238-151661839978069/AnsiballZ_command.py'
Jan 26 16:20:36 compute-0 sudo[112199]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:20:36 compute-0 python3.9[112201]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 16:20:36 compute-0 sudo[112199]: pam_unix(sudo:session): session closed for user root
Jan 26 16:20:37 compute-0 sudo[112352]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brfaakihjdslympivuudpdpmuydmypbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444436.7712448-238-215723751991172/AnsiballZ_command.py'
Jan 26 16:20:37 compute-0 sudo[112352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:20:37 compute-0 python3.9[112354]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 16:20:37 compute-0 sudo[112352]: pam_unix(sudo:session): session closed for user root
Jan 26 16:20:37 compute-0 sudo[112505]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aazrbqxaomlgviiairhcvulyxfybttma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444437.4411893-238-11643608781163/AnsiballZ_command.py'
Jan 26 16:20:37 compute-0 sudo[112505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:20:37 compute-0 python3.9[112507]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 16:20:37 compute-0 sudo[112505]: pam_unix(sudo:session): session closed for user root
Jan 26 16:20:38 compute-0 sudo[112658]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qttxqptzontnveehjpypufwisohzireo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444438.077035-238-104141249688057/AnsiballZ_command.py'
Jan 26 16:20:38 compute-0 sudo[112658]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:20:38 compute-0 python3.9[112660]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 16:20:38 compute-0 sudo[112658]: pam_unix(sudo:session): session closed for user root
Jan 26 16:20:38 compute-0 sudo[112811]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fiahtnkgiylaisgcvlwedijkrycwvemu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444438.7296782-238-221256459867845/AnsiballZ_command.py'
Jan 26 16:20:39 compute-0 sudo[112811]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:20:39 compute-0 python3.9[112813]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 16:20:39 compute-0 sudo[112811]: pam_unix(sudo:session): session closed for user root
Jan 26 16:20:39 compute-0 sudo[112964]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbowraqsgwapzzjbdntuuhhwhzqqihdb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444439.358176-238-214460369605013/AnsiballZ_command.py'
Jan 26 16:20:39 compute-0 sudo[112964]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:20:39 compute-0 python3.9[112966]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 16:20:39 compute-0 sudo[112964]: pam_unix(sudo:session): session closed for user root
Jan 26 16:20:40 compute-0 sudo[113117]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-euoqqspyymyugkcxrfxqalkpotxgjanp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444440.0907447-238-154276243445624/AnsiballZ_command.py'
Jan 26 16:20:40 compute-0 sudo[113117]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:20:40 compute-0 python3.9[113119]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 16:20:40 compute-0 sudo[113117]: pam_unix(sudo:session): session closed for user root
Jan 26 16:20:41 compute-0 sudo[113270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrdmcnqlzrxzjeekxbconemxggeqecay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444440.9699426-292-177403473708178/AnsiballZ_getent.py'
Jan 26 16:20:41 compute-0 sudo[113270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:20:41 compute-0 python3.9[113272]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Jan 26 16:20:41 compute-0 sudo[113270]: pam_unix(sudo:session): session closed for user root
Jan 26 16:20:42 compute-0 sudo[113423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-covttcdpxkajkehvhtbhnyluqwgfhsla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444441.8740723-300-184945083087094/AnsiballZ_group.py'
Jan 26 16:20:42 compute-0 sudo[113423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:20:42 compute-0 python3.9[113425]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 26 16:20:43 compute-0 groupadd[113426]: group added to /etc/group: name=libvirt, GID=42473
Jan 26 16:20:43 compute-0 groupadd[113426]: group added to /etc/gshadow: name=libvirt
Jan 26 16:20:43 compute-0 groupadd[113426]: new group: name=libvirt, GID=42473
Jan 26 16:20:43 compute-0 sudo[113423]: pam_unix(sudo:session): session closed for user root
Jan 26 16:20:44 compute-0 sudo[113581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdcmkhvvozmbjjfjomzeyntibvqyuypi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444443.6803336-308-40551767980688/AnsiballZ_user.py'
Jan 26 16:20:44 compute-0 sudo[113581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:20:44 compute-0 python3.9[113583]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 26 16:20:44 compute-0 useradd[113585]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Jan 26 16:20:44 compute-0 sudo[113581]: pam_unix(sudo:session): session closed for user root
Jan 26 16:20:45 compute-0 sudo[113741]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txzgvxobuumkbtfcznekbdpwjlpiyzcj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444445.2812731-319-242588345979113/AnsiballZ_setup.py'
Jan 26 16:20:45 compute-0 sudo[113741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:20:45 compute-0 python3.9[113743]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 26 16:20:46 compute-0 sudo[113741]: pam_unix(sudo:session): session closed for user root
Jan 26 16:20:46 compute-0 sudo[113825]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-woacuapawclkioxbtitgqyyuvgxeekji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444445.2812731-319-242588345979113/AnsiballZ_dnf.py'
Jan 26 16:20:46 compute-0 sudo[113825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:20:46 compute-0 python3.9[113827]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 26 16:21:01 compute-0 podman[114010]: 2026-01-26 16:21:01.180406841 +0000 UTC m=+0.062229623 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:21:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:21:01.694 106955 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:21:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:21:01.695 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:21:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:21:01.695 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:21:05 compute-0 podman[114035]: 2026-01-26 16:21:05.30027364 +0000 UTC m=+0.164534433 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:21:14 compute-0 kernel: SELinux:  Converting 2764 SID table entries...
Jan 26 16:21:14 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 26 16:21:14 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 26 16:21:14 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 26 16:21:14 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 26 16:21:14 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 26 16:21:14 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 26 16:21:14 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 26 16:21:23 compute-0 kernel: SELinux:  Converting 2764 SID table entries...
Jan 26 16:21:23 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 26 16:21:23 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 26 16:21:23 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 26 16:21:23 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 26 16:21:23 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 26 16:21:23 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 26 16:21:23 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 26 16:21:32 compute-0 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Jan 26 16:21:32 compute-0 podman[114078]: 2026-01-26 16:21:32.20520347 +0000 UTC m=+0.078064189 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:21:36 compute-0 podman[114098]: 2026-01-26 16:21:36.207121595 +0000 UTC m=+0.094281217 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 16:22:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:22:01.695 106955 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:22:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:22:01.696 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:22:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:22:01.696 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:22:03 compute-0 podman[129387]: 2026-01-26 16:22:03.22701571 +0000 UTC m=+0.102264461 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Jan 26 16:22:07 compute-0 podman[130994]: 2026-01-26 16:22:07.224829783 +0000 UTC m=+0.109349707 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 26 16:22:20 compute-0 kernel: SELinux:  Converting 2765 SID table entries...
Jan 26 16:22:20 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 26 16:22:20 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 26 16:22:20 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 26 16:22:20 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 26 16:22:20 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 26 16:22:20 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 26 16:22:20 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 26 16:22:21 compute-0 groupadd[131049]: group added to /etc/group: name=dnsmasq, GID=993
Jan 26 16:22:21 compute-0 groupadd[131049]: group added to /etc/gshadow: name=dnsmasq
Jan 26 16:22:21 compute-0 groupadd[131049]: new group: name=dnsmasq, GID=993
Jan 26 16:22:21 compute-0 useradd[131056]: new user: name=dnsmasq, UID=992, GID=993, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Jan 26 16:22:22 compute-0 dbus-broker-launch[761]: Noticed file-system modification, trigger reload.
Jan 26 16:22:22 compute-0 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Jan 26 16:22:22 compute-0 dbus-broker-launch[761]: Noticed file-system modification, trigger reload.
Jan 26 16:22:22 compute-0 groupadd[131069]: group added to /etc/group: name=clevis, GID=992
Jan 26 16:22:22 compute-0 groupadd[131069]: group added to /etc/gshadow: name=clevis
Jan 26 16:22:22 compute-0 groupadd[131069]: new group: name=clevis, GID=992
Jan 26 16:22:23 compute-0 useradd[131076]: new user: name=clevis, UID=991, GID=992, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Jan 26 16:22:23 compute-0 usermod[131086]: add 'clevis' to group 'tss'
Jan 26 16:22:23 compute-0 usermod[131086]: add 'clevis' to shadow group 'tss'
Jan 26 16:22:25 compute-0 polkitd[43693]: Reloading rules
Jan 26 16:22:25 compute-0 polkitd[43693]: Collecting garbage unconditionally...
Jan 26 16:22:25 compute-0 polkitd[43693]: Loading rules from directory /etc/polkit-1/rules.d
Jan 26 16:22:25 compute-0 polkitd[43693]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 26 16:22:25 compute-0 polkitd[43693]: Finished loading, compiling and executing 3 rules
Jan 26 16:22:25 compute-0 polkitd[43693]: Reloading rules
Jan 26 16:22:25 compute-0 polkitd[43693]: Collecting garbage unconditionally...
Jan 26 16:22:25 compute-0 polkitd[43693]: Loading rules from directory /etc/polkit-1/rules.d
Jan 26 16:22:25 compute-0 polkitd[43693]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 26 16:22:25 compute-0 polkitd[43693]: Finished loading, compiling and executing 3 rules
Jan 26 16:22:26 compute-0 groupadd[131276]: group added to /etc/group: name=ceph, GID=167
Jan 26 16:22:26 compute-0 groupadd[131276]: group added to /etc/gshadow: name=ceph
Jan 26 16:22:26 compute-0 groupadd[131276]: new group: name=ceph, GID=167
Jan 26 16:22:26 compute-0 useradd[131282]: new user: name=ceph, UID=167, GID=167, home=/var/lib/ceph, shell=/sbin/nologin, from=none
Jan 26 16:22:29 compute-0 systemd[1]: Stopping OpenSSH server daemon...
Jan 26 16:22:29 compute-0 sshd[1007]: Received signal 15; terminating.
Jan 26 16:22:29 compute-0 systemd[1]: sshd.service: Deactivated successfully.
Jan 26 16:22:29 compute-0 systemd[1]: Stopped OpenSSH server daemon.
Jan 26 16:22:29 compute-0 systemd[1]: sshd.service: Consumed 2.125s CPU time, read 564.0K from disk, written 8.0K to disk.
Jan 26 16:22:29 compute-0 systemd[1]: Stopped target sshd-keygen.target.
Jan 26 16:22:29 compute-0 systemd[1]: Stopping sshd-keygen.target...
Jan 26 16:22:29 compute-0 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 26 16:22:29 compute-0 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 26 16:22:29 compute-0 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 26 16:22:29 compute-0 systemd[1]: Reached target sshd-keygen.target.
Jan 26 16:22:29 compute-0 systemd[1]: Starting OpenSSH server daemon...
Jan 26 16:22:29 compute-0 sshd[131801]: Server listening on 0.0.0.0 port 22.
Jan 26 16:22:29 compute-0 sshd[131801]: Server listening on :: port 22.
Jan 26 16:22:29 compute-0 systemd[1]: Started OpenSSH server daemon.
Jan 26 16:22:31 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 26 16:22:31 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 26 16:22:31 compute-0 systemd[1]: Reloading.
Jan 26 16:22:32 compute-0 systemd-rc-local-generator[132055]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 16:22:32 compute-0 systemd-sysv-generator[132061]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 16:22:32 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 26 16:22:34 compute-0 podman[134286]: 2026-01-26 16:22:34.22073642 +0000 UTC m=+0.089453367 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 26 16:22:35 compute-0 sudo[113825]: pam_unix(sudo:session): session closed for user root
Jan 26 16:22:36 compute-0 sudo[137209]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smllthdlnnabmtnwionsujaoghgaawtf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444555.898335-331-237845644829623/AnsiballZ_systemd.py'
Jan 26 16:22:36 compute-0 sudo[137209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:22:37 compute-0 python3.9[137237]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 26 16:22:37 compute-0 systemd[1]: Reloading.
Jan 26 16:22:37 compute-0 systemd-sysv-generator[137749]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 16:22:37 compute-0 systemd-rc-local-generator[137745]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 16:22:37 compute-0 sudo[137209]: pam_unix(sudo:session): session closed for user root
Jan 26 16:22:37 compute-0 podman[137926]: 2026-01-26 16:22:37.478921723 +0000 UTC m=+0.091894665 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 26 16:22:37 compute-0 sudo[138523]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrfpehebfuoovidryyhiadtillghgzeb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444557.5356922-331-30852921229802/AnsiballZ_systemd.py'
Jan 26 16:22:37 compute-0 sudo[138523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:22:38 compute-0 python3.9[138549]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 26 16:22:38 compute-0 systemd[1]: Reloading.
Jan 26 16:22:38 compute-0 systemd-rc-local-generator[139055]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 16:22:38 compute-0 systemd-sysv-generator[139058]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 16:22:38 compute-0 sudo[138523]: pam_unix(sudo:session): session closed for user root
Jan 26 16:22:38 compute-0 sudo[139913]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tepdsxqhkwghjnamqnineghhkzrfvleo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444558.5916872-331-126665188079906/AnsiballZ_systemd.py'
Jan 26 16:22:38 compute-0 sudo[139913]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:22:39 compute-0 python3.9[139934]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 26 16:22:39 compute-0 systemd[1]: Reloading.
Jan 26 16:22:39 compute-0 systemd-sysv-generator[140357]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 16:22:39 compute-0 systemd-rc-local-generator[140354]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 16:22:39 compute-0 sudo[139913]: pam_unix(sudo:session): session closed for user root
Jan 26 16:22:40 compute-0 sudo[141111]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ousxakclaylxenleiysrkpbpwucarzra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444559.713013-331-261455312866496/AnsiballZ_systemd.py'
Jan 26 16:22:40 compute-0 sudo[141111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:22:40 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 26 16:22:40 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 26 16:22:40 compute-0 systemd[1]: man-db-cache-update.service: Consumed 10.573s CPU time.
Jan 26 16:22:40 compute-0 systemd[1]: run-re951a738809c4ef2b400fc762703e59c.service: Deactivated successfully.
Jan 26 16:22:40 compute-0 python3.9[141126]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 26 16:22:40 compute-0 systemd[1]: Reloading.
Jan 26 16:22:40 compute-0 systemd-rc-local-generator[141232]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 16:22:40 compute-0 systemd-sysv-generator[141235]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 16:22:40 compute-0 sudo[141111]: pam_unix(sudo:session): session closed for user root
Jan 26 16:22:41 compute-0 sudo[141390]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atghbpczoeuxmmyrswcmtsejubscytxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444560.978819-360-53462038585250/AnsiballZ_systemd.py'
Jan 26 16:22:41 compute-0 sudo[141390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:22:41 compute-0 python3.9[141392]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 26 16:22:41 compute-0 systemd[1]: Reloading.
Jan 26 16:22:41 compute-0 systemd-rc-local-generator[141423]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 16:22:41 compute-0 systemd-sysv-generator[141427]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 16:22:42 compute-0 sudo[141390]: pam_unix(sudo:session): session closed for user root
Jan 26 16:22:42 compute-0 sudo[141580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aeyzzwmcjuuviyifamtvunfdghqfjjna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444562.1568239-360-78272630573465/AnsiballZ_systemd.py'
Jan 26 16:22:42 compute-0 sudo[141580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:22:42 compute-0 python3.9[141582]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 26 16:22:42 compute-0 systemd[1]: Reloading.
Jan 26 16:22:42 compute-0 systemd-rc-local-generator[141614]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 16:22:42 compute-0 systemd-sysv-generator[141617]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 16:22:43 compute-0 sudo[141580]: pam_unix(sudo:session): session closed for user root
Jan 26 16:22:43 compute-0 sudo[141770]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhwzuzwqprusqedskirrloounahzkrta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444563.267688-360-81130340347336/AnsiballZ_systemd.py'
Jan 26 16:22:43 compute-0 sudo[141770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:22:43 compute-0 python3.9[141772]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 26 16:22:43 compute-0 systemd[1]: Reloading.
Jan 26 16:22:43 compute-0 systemd-rc-local-generator[141805]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 16:22:43 compute-0 systemd-sysv-generator[141809]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 16:22:44 compute-0 sudo[141770]: pam_unix(sudo:session): session closed for user root
Jan 26 16:22:44 compute-0 sudo[141961]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-giadvwzzdpqjsrdwerpbullgruwrwoyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444564.3183093-360-30898608539036/AnsiballZ_systemd.py'
Jan 26 16:22:44 compute-0 sudo[141961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:22:44 compute-0 python3.9[141963]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 26 16:22:45 compute-0 sudo[141961]: pam_unix(sudo:session): session closed for user root
Jan 26 16:22:45 compute-0 sudo[142116]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngvphonwsctuouvhxlcfejxshfafubbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444565.2446928-360-278886541499866/AnsiballZ_systemd.py'
Jan 26 16:22:45 compute-0 sudo[142116]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:22:45 compute-0 python3.9[142118]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 26 16:22:45 compute-0 systemd[1]: Reloading.
Jan 26 16:22:45 compute-0 systemd-rc-local-generator[142148]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 16:22:45 compute-0 systemd-sysv-generator[142153]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 16:22:46 compute-0 sudo[142116]: pam_unix(sudo:session): session closed for user root
Jan 26 16:22:46 compute-0 sudo[142307]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atgffnctyyvoyfanloelayduzcadyaeq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444566.483966-396-218768713293868/AnsiballZ_systemd.py'
Jan 26 16:22:46 compute-0 sudo[142307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:22:47 compute-0 python3.9[142309]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 26 16:22:47 compute-0 systemd[1]: Reloading.
Jan 26 16:22:47 compute-0 systemd-rc-local-generator[142336]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 16:22:47 compute-0 systemd-sysv-generator[142342]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 16:22:47 compute-0 systemd[1]: Listening on libvirt proxy daemon socket.
Jan 26 16:22:47 compute-0 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Jan 26 16:22:47 compute-0 sudo[142307]: pam_unix(sudo:session): session closed for user root
Jan 26 16:22:47 compute-0 sudo[142501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fduugdyapdjxlaolgokmzcieiiouajdd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444567.6577833-404-86072955062862/AnsiballZ_systemd.py'
Jan 26 16:22:47 compute-0 sudo[142501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:22:48 compute-0 python3.9[142503]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 26 16:22:48 compute-0 sudo[142501]: pam_unix(sudo:session): session closed for user root
Jan 26 16:22:48 compute-0 sudo[142656]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbxvgequyxkugqmxlbirbkmjjuycdnxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444568.4382732-404-149051110002638/AnsiballZ_systemd.py'
Jan 26 16:22:48 compute-0 sudo[142656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:22:49 compute-0 python3.9[142658]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 26 16:22:49 compute-0 sudo[142656]: pam_unix(sudo:session): session closed for user root
Jan 26 16:22:49 compute-0 sudo[142811]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzmbzlcuovqnvbhayuccqpdbzkamprkd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444569.2649415-404-61142095070105/AnsiballZ_systemd.py'
Jan 26 16:22:49 compute-0 sudo[142811]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:22:49 compute-0 python3.9[142813]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 26 16:22:49 compute-0 sudo[142811]: pam_unix(sudo:session): session closed for user root
Jan 26 16:22:50 compute-0 sudo[142966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlcepdjmdkhundymzvmkbeibaryenuss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444570.0630643-404-224740688938679/AnsiballZ_systemd.py'
Jan 26 16:22:50 compute-0 sudo[142966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:22:50 compute-0 python3.9[142968]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 26 16:22:50 compute-0 sudo[142966]: pam_unix(sudo:session): session closed for user root
Jan 26 16:22:51 compute-0 sudo[143121]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtrwjqiixwasxfvawmlsbqceqqjjuhkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444570.8664758-404-184988512547993/AnsiballZ_systemd.py'
Jan 26 16:22:51 compute-0 sudo[143121]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:22:51 compute-0 python3.9[143123]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 26 16:22:51 compute-0 sudo[143121]: pam_unix(sudo:session): session closed for user root
Jan 26 16:22:51 compute-0 sudo[143276]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijljexrbfktzdchhitniwxcoeuwbnicy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444571.6136787-404-145497552252211/AnsiballZ_systemd.py'
Jan 26 16:22:51 compute-0 sudo[143276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:22:52 compute-0 python3.9[143278]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 26 16:22:52 compute-0 sudo[143276]: pam_unix(sudo:session): session closed for user root
Jan 26 16:22:52 compute-0 sudo[143431]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwutzljnbkqypxvhbafgyiseurpyuuvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444572.4409974-404-39504612838673/AnsiballZ_systemd.py'
Jan 26 16:22:52 compute-0 sudo[143431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:22:53 compute-0 python3.9[143433]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 26 16:22:53 compute-0 sudo[143431]: pam_unix(sudo:session): session closed for user root
Jan 26 16:22:53 compute-0 sudo[143586]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpxnwyslrmyrjxfnegkrpxkykgsspyqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444573.3824935-404-119091877762812/AnsiballZ_systemd.py'
Jan 26 16:22:53 compute-0 sudo[143586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:22:54 compute-0 python3.9[143588]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 26 16:22:54 compute-0 sudo[143586]: pam_unix(sudo:session): session closed for user root
Jan 26 16:22:54 compute-0 sudo[143741]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmoujaxuzdscbjdlbesbbevptluqvuhv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444574.2500975-404-197300316246638/AnsiballZ_systemd.py'
Jan 26 16:22:54 compute-0 sudo[143741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:22:54 compute-0 python3.9[143743]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 26 16:22:54 compute-0 sudo[143741]: pam_unix(sudo:session): session closed for user root
Jan 26 16:22:55 compute-0 sudo[143896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybfrlkvtwslpreosxosgoloseweqlkmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444575.1795008-404-141561667163921/AnsiballZ_systemd.py'
Jan 26 16:22:55 compute-0 sudo[143896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:22:55 compute-0 python3.9[143898]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 26 16:22:55 compute-0 sudo[143896]: pam_unix(sudo:session): session closed for user root
Jan 26 16:22:56 compute-0 sudo[144051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozumsmnkblvylcqumxmcraxeoooozyim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444576.012884-404-87417320125120/AnsiballZ_systemd.py'
Jan 26 16:22:56 compute-0 sudo[144051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:22:56 compute-0 python3.9[144053]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 26 16:22:56 compute-0 sudo[144051]: pam_unix(sudo:session): session closed for user root
Jan 26 16:22:57 compute-0 sudo[144206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnkjrnpbyvemprsbrgndvsiwuixohwrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444576.8338137-404-23540792428598/AnsiballZ_systemd.py'
Jan 26 16:22:57 compute-0 sudo[144206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:22:57 compute-0 python3.9[144208]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 26 16:22:57 compute-0 sudo[144206]: pam_unix(sudo:session): session closed for user root
Jan 26 16:22:58 compute-0 sudo[144361]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxzyaehwubxxvpxcawufejqtdztamqfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444577.7576704-404-6923410726130/AnsiballZ_systemd.py'
Jan 26 16:22:58 compute-0 sudo[144361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:22:58 compute-0 python3.9[144363]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 26 16:22:58 compute-0 sudo[144361]: pam_unix(sudo:session): session closed for user root
Jan 26 16:22:59 compute-0 sudo[144516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojfpwogpyzjqsuzjozhjwxsznklahpjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444578.684343-404-256758184959395/AnsiballZ_systemd.py'
Jan 26 16:22:59 compute-0 sudo[144516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:22:59 compute-0 python3.9[144518]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 26 16:22:59 compute-0 sudo[144516]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:00 compute-0 sudo[144671]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phgcpxvnipigqbmdnfshxqwwviswvhim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444580.0170498-506-252385732701512/AnsiballZ_file.py'
Jan 26 16:23:00 compute-0 sudo[144671]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:23:00 compute-0 python3.9[144673]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:23:00 compute-0 sudo[144671]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:01 compute-0 sudo[144825]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vffzdclchdkzcmmzpoxkfxlsvymsveqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444580.800424-506-172050364453966/AnsiballZ_file.py'
Jan 26 16:23:01 compute-0 sudo[144825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:23:01 compute-0 python3.9[144827]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:23:01 compute-0 sudo[144825]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:23:01.697 106955 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:23:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:23:01.698 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:23:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:23:01.699 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:23:01 compute-0 sudo[144977]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eholmejrxqyuliufjodseozrvlmzkzge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444581.501669-506-150588081883924/AnsiballZ_file.py'
Jan 26 16:23:01 compute-0 sudo[144977]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:23:01 compute-0 python3.9[144979]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:23:01 compute-0 sudo[144977]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:02 compute-0 sudo[145129]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tltwlvzxudxtlqqccsieineoqdnyhxfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444582.1396058-506-271178782529182/AnsiballZ_file.py'
Jan 26 16:23:02 compute-0 sudo[145129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:23:02 compute-0 sshd-session[144674]: Connection reset by authenticating user root 176.120.22.13 port 42220 [preauth]
Jan 26 16:23:02 compute-0 python3.9[145131]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:23:02 compute-0 sudo[145129]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:03 compute-0 sudo[145282]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndbrpaenyyycutfoykjuaktlxzedbtqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444582.801697-506-49258016167021/AnsiballZ_file.py'
Jan 26 16:23:03 compute-0 sudo[145282]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:23:03 compute-0 python3.9[145284]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:23:03 compute-0 sudo[145282]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:03 compute-0 sudo[145435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbkonibgbsqozanbbkqjcbmcnwhthlop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444583.4339294-506-160726678705986/AnsiballZ_file.py'
Jan 26 16:23:03 compute-0 sudo[145435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:23:03 compute-0 python3.9[145437]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:23:03 compute-0 sudo[145435]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:04 compute-0 podman[145561]: 2026-01-26 16:23:04.478718959 +0000 UTC m=+0.058443628 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 26 16:23:04 compute-0 python3.9[145600]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 16:23:05 compute-0 sshd-session[145165]: Invalid user vpn from 176.120.22.13 port 49696
Jan 26 16:23:05 compute-0 sudo[145757]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkcqnkanhawfynxxapvajyfqfejnncdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444584.9222522-557-234364407730446/AnsiballZ_stat.py'
Jan 26 16:23:05 compute-0 sudo[145757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:23:05 compute-0 python3.9[145759]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:23:05 compute-0 sshd-session[145165]: Connection reset by invalid user vpn 176.120.22.13 port 49696 [preauth]
Jan 26 16:23:05 compute-0 sudo[145757]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:06 compute-0 sudo[145884]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdviufponvyfegjwhvbwyghtwzofhsfz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444584.9222522-557-234364407730446/AnsiballZ_copy.py'
Jan 26 16:23:06 compute-0 sudo[145884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:23:06 compute-0 python3.9[145886]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769444584.9222522-557-234364407730446/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:23:06 compute-0 sudo[145884]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:06 compute-0 sudo[146036]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcjhvkxaecrkbjzxmynvbdmfyaozlbng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444586.6391137-557-151408799182714/AnsiballZ_stat.py'
Jan 26 16:23:06 compute-0 sudo[146036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:23:07 compute-0 python3.9[146038]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:23:07 compute-0 sudo[146036]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:07 compute-0 sshd-session[145785]: Invalid user ubnt from 176.120.22.13 port 49704
Jan 26 16:23:07 compute-0 sudo[146171]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmjwwrdngqhatdkfynysjlpauxvmgimi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444586.6391137-557-151408799182714/AnsiballZ_copy.py'
Jan 26 16:23:07 compute-0 sudo[146171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:23:07 compute-0 podman[146135]: 2026-01-26 16:23:07.753270409 +0000 UTC m=+0.172656916 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:23:07 compute-0 python3.9[146183]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769444586.6391137-557-151408799182714/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:23:07 compute-0 sshd-session[145785]: Connection reset by invalid user ubnt 176.120.22.13 port 49704 [preauth]
Jan 26 16:23:07 compute-0 sudo[146171]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:08 compute-0 sudo[146341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rttgtiwqcrtgpezcazuqfqpwssuhdmwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444588.0232503-557-123057122134383/AnsiballZ_stat.py'
Jan 26 16:23:08 compute-0 sudo[146341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:23:08 compute-0 python3.9[146343]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:23:08 compute-0 sudo[146341]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:08 compute-0 sudo[146466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtlikwcjtakyvsgxmgpzuipqupjnzliu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444588.0232503-557-123057122134383/AnsiballZ_copy.py'
Jan 26 16:23:08 compute-0 sudo[146466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:23:09 compute-0 python3.9[146468]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769444588.0232503-557-123057122134383/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:23:09 compute-0 sudo[146466]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:09 compute-0 sudo[146618]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kugbdbekqexekeaimmjkcdmevbaynvue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444589.3710423-557-230834440748986/AnsiballZ_stat.py'
Jan 26 16:23:09 compute-0 sudo[146618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:23:09 compute-0 python3.9[146620]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:23:09 compute-0 sudo[146618]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:10 compute-0 sudo[146743]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybhyidlmbqchwbvnnqdyrbxcpnqcunud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444589.3710423-557-230834440748986/AnsiballZ_copy.py'
Jan 26 16:23:10 compute-0 sudo[146743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:23:10 compute-0 sshd-session[146266]: Connection reset by authenticating user root 176.120.22.13 port 49724 [preauth]
Jan 26 16:23:10 compute-0 python3.9[146745]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769444589.3710423-557-230834440748986/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:23:10 compute-0 sudo[146743]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:11 compute-0 sudo[146896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afaprrxwuezyyzzxogukpicwcvhqujyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444590.7600172-557-177731850879103/AnsiballZ_stat.py'
Jan 26 16:23:11 compute-0 sudo[146896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:23:11 compute-0 python3.9[146898]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:23:11 compute-0 sudo[146896]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:11 compute-0 sudo[147022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iyorsnfqbxamuidcjkqiadxluieyjlnu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444590.7600172-557-177731850879103/AnsiballZ_copy.py'
Jan 26 16:23:11 compute-0 sudo[147022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:23:11 compute-0 python3.9[147024]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769444590.7600172-557-177731850879103/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:23:11 compute-0 sudo[147022]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:12 compute-0 sudo[147174]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwosndgtehnqabmdiqwxommwxsdnjver ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444591.940865-557-20748248911724/AnsiballZ_stat.py'
Jan 26 16:23:12 compute-0 sudo[147174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:23:12 compute-0 python3.9[147176]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:23:12 compute-0 sudo[147174]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:12 compute-0 sudo[147299]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxhqdocoubyjctkwayqzqdrlzllusvct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444591.940865-557-20748248911724/AnsiballZ_copy.py'
Jan 26 16:23:12 compute-0 sudo[147299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:23:13 compute-0 python3.9[147301]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769444591.940865-557-20748248911724/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:23:13 compute-0 sudo[147299]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:13 compute-0 sshd-session[146793]: Connection reset by authenticating user root 176.120.22.13 port 49738 [preauth]
Jan 26 16:23:13 compute-0 sudo[147451]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbimufzccvrdmidjvzhauorgdqorpstp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444593.25902-557-46374835268434/AnsiballZ_stat.py'
Jan 26 16:23:13 compute-0 sudo[147451]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:23:13 compute-0 python3.9[147453]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:23:13 compute-0 sudo[147451]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:14 compute-0 sudo[147574]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aputrasoaonszsgosaqwahnhfianislp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444593.25902-557-46374835268434/AnsiballZ_copy.py'
Jan 26 16:23:14 compute-0 sudo[147574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:23:14 compute-0 python3.9[147576]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769444593.25902-557-46374835268434/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:23:14 compute-0 sudo[147574]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:14 compute-0 sudo[147726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kippgauranwtyqtynkgznwkrgmtymldx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444594.5235102-557-63257949806740/AnsiballZ_stat.py'
Jan 26 16:23:14 compute-0 sudo[147726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:23:14 compute-0 python3.9[147728]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:23:15 compute-0 sudo[147726]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:15 compute-0 sudo[147851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbvsjxptzkvecpvoflijcxeqnhppokke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444594.5235102-557-63257949806740/AnsiballZ_copy.py'
Jan 26 16:23:15 compute-0 sudo[147851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:23:15 compute-0 python3.9[147853]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769444594.5235102-557-63257949806740/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:23:15 compute-0 sudo[147851]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:16 compute-0 sudo[148003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uodmsabgvuqwnmpcrjqpmgsjsaymifta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444595.7543242-670-213105758411614/AnsiballZ_command.py'
Jan 26 16:23:16 compute-0 sudo[148003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:23:16 compute-0 python3.9[148005]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Jan 26 16:23:16 compute-0 sudo[148003]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:16 compute-0 sudo[148156]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-biyajpvnpdpxagcoyzfzuufeqpaqediz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444596.5333319-679-114842572514192/AnsiballZ_file.py'
Jan 26 16:23:16 compute-0 sudo[148156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:23:17 compute-0 python3.9[148158]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:23:17 compute-0 sudo[148156]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:17 compute-0 sudo[148308]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckmnqmejqsqofehzseyxuqvpxgzhmqns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444597.2292228-679-278723186931061/AnsiballZ_file.py'
Jan 26 16:23:17 compute-0 sudo[148308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:23:17 compute-0 python3.9[148310]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:23:17 compute-0 sudo[148308]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:18 compute-0 sudo[148460]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwfmdsueraawxsvsqbpsopmunjqvjowf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444597.8572962-679-76223866898940/AnsiballZ_file.py'
Jan 26 16:23:18 compute-0 sudo[148460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:23:18 compute-0 python3.9[148462]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:23:18 compute-0 sudo[148460]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:18 compute-0 sudo[148612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plrcepuncxemscssiwakzxobsplfjomx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444598.521454-679-113603070066354/AnsiballZ_file.py'
Jan 26 16:23:18 compute-0 sudo[148612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:23:19 compute-0 python3.9[148614]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:23:19 compute-0 sudo[148612]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:19 compute-0 sudo[148764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmxzaxxqxjuhvebildgajrflqcykrtqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444599.2584841-679-2151843799908/AnsiballZ_file.py'
Jan 26 16:23:19 compute-0 sudo[148764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:23:19 compute-0 python3.9[148766]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:23:19 compute-0 sudo[148764]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:20 compute-0 sudo[148916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppiguudasqvbioiciyxrqbfpexnxunvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444599.909674-679-43153191632086/AnsiballZ_file.py'
Jan 26 16:23:20 compute-0 sudo[148916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:23:20 compute-0 python3.9[148918]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:23:20 compute-0 sudo[148916]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:22 compute-0 sudo[149068]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpigsdhfducrglmkbxiuinydrlqunase ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444602.0892873-679-181804765001369/AnsiballZ_file.py'
Jan 26 16:23:22 compute-0 sudo[149068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:23:22 compute-0 python3.9[149070]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:23:22 compute-0 sudo[149068]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:23 compute-0 sudo[149220]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfrenywkvlxkmkiupzjpcmxydnkkgtmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444602.8223102-679-249255239593563/AnsiballZ_file.py'
Jan 26 16:23:23 compute-0 sudo[149220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:23:23 compute-0 python3.9[149222]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:23:23 compute-0 sudo[149220]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:23 compute-0 sudo[149372]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bollpuedhjhigwpprwhiacmhhbcbnzlf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444603.4729867-679-137357749204038/AnsiballZ_file.py'
Jan 26 16:23:23 compute-0 sudo[149372]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:23:23 compute-0 python3.9[149374]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:23:23 compute-0 sudo[149372]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:24 compute-0 sudo[149524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfgrxglrgxbpchguzauogiaxdcevpvhb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444604.1199243-679-128553384244751/AnsiballZ_file.py'
Jan 26 16:23:24 compute-0 sudo[149524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:23:24 compute-0 python3.9[149526]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:23:24 compute-0 sudo[149524]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:25 compute-0 sudo[149676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owzqcfavaeqkpjqszemztgftmvwoibta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444604.771191-679-111691645786919/AnsiballZ_file.py'
Jan 26 16:23:25 compute-0 sudo[149676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:23:25 compute-0 python3.9[149678]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:23:25 compute-0 sudo[149676]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:25 compute-0 sudo[149828]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-foskqajwmkxtaonfvvyjjksiywphaykk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444605.4357226-679-43792946141792/AnsiballZ_file.py'
Jan 26 16:23:25 compute-0 sudo[149828]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:23:25 compute-0 python3.9[149830]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:23:26 compute-0 sudo[149828]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:26 compute-0 sudo[149980]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjofilluinlnbycyudckyfylkpwlrbee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444606.1564105-679-96508051116093/AnsiballZ_file.py'
Jan 26 16:23:26 compute-0 sudo[149980]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:23:26 compute-0 python3.9[149982]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:23:26 compute-0 sudo[149980]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:27 compute-0 sudo[150132]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mebixftxoltakkwhinibvcucvzlwjqeb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444606.877137-679-52620239176156/AnsiballZ_file.py'
Jan 26 16:23:27 compute-0 sudo[150132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:23:27 compute-0 python3.9[150134]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:23:27 compute-0 sudo[150132]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:27 compute-0 sudo[150284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chsnjgaywgiryfutqvkappfiffrcdgyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444607.611457-778-125972053105210/AnsiballZ_stat.py'
Jan 26 16:23:27 compute-0 sudo[150284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:23:28 compute-0 python3.9[150286]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:23:28 compute-0 sudo[150284]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:28 compute-0 sudo[150407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjknirkafftpqkvldaruizyyaufduymo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444607.611457-778-125972053105210/AnsiballZ_copy.py'
Jan 26 16:23:28 compute-0 sudo[150407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:23:28 compute-0 python3.9[150409]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769444607.611457-778-125972053105210/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:23:28 compute-0 sudo[150407]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:29 compute-0 sudo[150559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzutsbzurxiutxrjirvpgwcbfdijfodo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444608.9437652-778-64116725808450/AnsiballZ_stat.py'
Jan 26 16:23:29 compute-0 sudo[150559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:23:29 compute-0 python3.9[150561]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:23:29 compute-0 sudo[150559]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:29 compute-0 sudo[150682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzddhvyrpjotysbitvdojpvzjanlupbf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444608.9437652-778-64116725808450/AnsiballZ_copy.py'
Jan 26 16:23:29 compute-0 sudo[150682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:23:30 compute-0 python3.9[150684]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769444608.9437652-778-64116725808450/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:23:30 compute-0 sudo[150682]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:30 compute-0 sudo[150834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqxawxkveourevyqemucqgodfwbtnqag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444610.3923278-778-84463646048361/AnsiballZ_stat.py'
Jan 26 16:23:30 compute-0 sudo[150834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:23:30 compute-0 python3.9[150836]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:23:30 compute-0 sudo[150834]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:31 compute-0 sudo[150957]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crxpjslupakpakkhejnotpcbdzjxaqvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444610.3923278-778-84463646048361/AnsiballZ_copy.py'
Jan 26 16:23:31 compute-0 sudo[150957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:23:31 compute-0 python3.9[150959]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769444610.3923278-778-84463646048361/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:23:31 compute-0 sudo[150957]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:32 compute-0 sudo[151109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slmawowoyozlptgysqnfnfsremnvovnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444611.7104025-778-12960001536537/AnsiballZ_stat.py'
Jan 26 16:23:32 compute-0 sudo[151109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:23:32 compute-0 python3.9[151111]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:23:32 compute-0 sudo[151109]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:32 compute-0 sudo[151232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhtjpxzhiujpviwiwyfjzuxgforynuoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444611.7104025-778-12960001536537/AnsiballZ_copy.py'
Jan 26 16:23:32 compute-0 sudo[151232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:23:32 compute-0 python3.9[151234]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769444611.7104025-778-12960001536537/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:23:32 compute-0 sudo[151232]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:33 compute-0 sudo[151384]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kylfaekogmuyfzxbdregtzfopyawzjbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444612.9162793-778-35854185993813/AnsiballZ_stat.py'
Jan 26 16:23:33 compute-0 sudo[151384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:23:33 compute-0 python3.9[151386]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:23:33 compute-0 sudo[151384]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:33 compute-0 sudo[151507]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujwldukqfxfeoyzhqagouehyhzvuwugs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444612.9162793-778-35854185993813/AnsiballZ_copy.py'
Jan 26 16:23:33 compute-0 sudo[151507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:23:34 compute-0 python3.9[151509]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769444612.9162793-778-35854185993813/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:23:34 compute-0 sudo[151507]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:34 compute-0 sudo[151659]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivluoiycpronpjdubgokywlwprdtyacv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444614.2184882-778-100378592971761/AnsiballZ_stat.py'
Jan 26 16:23:34 compute-0 sudo[151659]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:23:34 compute-0 podman[151661]: 2026-01-26 16:23:34.611874986 +0000 UTC m=+0.057017357 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:23:34 compute-0 python3.9[151662]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:23:34 compute-0 sudo[151659]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:35 compute-0 sudo[151802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbtieauuyzcsxrgprphytsxeobnyxcmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444614.2184882-778-100378592971761/AnsiballZ_copy.py'
Jan 26 16:23:35 compute-0 sudo[151802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:23:35 compute-0 python3.9[151804]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769444614.2184882-778-100378592971761/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:23:35 compute-0 sudo[151802]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:35 compute-0 sudo[151954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvazjkbmegnthaskzvmsravlupfisofh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444615.4882503-778-254258523166160/AnsiballZ_stat.py'
Jan 26 16:23:35 compute-0 sudo[151954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:23:35 compute-0 python3.9[151956]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:23:35 compute-0 sudo[151954]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:36 compute-0 sudo[152077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtjsjrbpnoqttsgijonnxqivaktxpyhm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444615.4882503-778-254258523166160/AnsiballZ_copy.py'
Jan 26 16:23:36 compute-0 sudo[152077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:23:36 compute-0 python3.9[152079]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769444615.4882503-778-254258523166160/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:23:36 compute-0 sudo[152077]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:36 compute-0 sudo[152229]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubthkzqygxgjcppfhqjfetmsyvomndyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444616.6307905-778-252719966709523/AnsiballZ_stat.py'
Jan 26 16:23:36 compute-0 sudo[152229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:23:37 compute-0 python3.9[152231]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:23:37 compute-0 sudo[152229]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:37 compute-0 sudo[152352]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvjgpxsyvfffqfanfiqocbmazfkiqylx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444616.6307905-778-252719966709523/AnsiballZ_copy.py'
Jan 26 16:23:37 compute-0 sudo[152352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:23:37 compute-0 python3.9[152354]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769444616.6307905-778-252719966709523/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:23:37 compute-0 sudo[152352]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:38 compute-0 podman[152456]: 2026-01-26 16:23:38.187127149 +0000 UTC m=+0.080801688 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:23:38 compute-0 sudo[152530]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eztgcwjwqadgadweqmmhycfaufpfimxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444617.8984098-778-281025448933904/AnsiballZ_stat.py'
Jan 26 16:23:38 compute-0 sudo[152530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:23:38 compute-0 python3.9[152533]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:23:38 compute-0 sudo[152530]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:38 compute-0 sudo[152654]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gyqvxtquvwesfoqkbqjdgjnudzlxhhkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444617.8984098-778-281025448933904/AnsiballZ_copy.py'
Jan 26 16:23:38 compute-0 sudo[152654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:23:38 compute-0 python3.9[152656]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769444617.8984098-778-281025448933904/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:23:38 compute-0 sudo[152654]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:39 compute-0 sudo[152806]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erjkjwmykgurkhghuewyvriqgsyavuvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444619.1260056-778-200285980197197/AnsiballZ_stat.py'
Jan 26 16:23:39 compute-0 sudo[152806]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:23:39 compute-0 python3.9[152808]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:23:39 compute-0 sudo[152806]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:40 compute-0 sudo[152929]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmhanxdfvuwvunalrfhhrzrnnojdsjuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444619.1260056-778-200285980197197/AnsiballZ_copy.py'
Jan 26 16:23:40 compute-0 sudo[152929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:23:40 compute-0 python3.9[152931]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769444619.1260056-778-200285980197197/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:23:40 compute-0 sudo[152929]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:40 compute-0 sudo[153081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwaqudihuglhdaokealexfvdbkyirgqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444620.3793223-778-193225250990935/AnsiballZ_stat.py'
Jan 26 16:23:40 compute-0 sudo[153081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:23:40 compute-0 python3.9[153083]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:23:40 compute-0 sudo[153081]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:41 compute-0 sudo[153204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgyejphbvwtvalfzednldefjhocwliqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444620.3793223-778-193225250990935/AnsiballZ_copy.py'
Jan 26 16:23:41 compute-0 sudo[153204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:23:41 compute-0 python3.9[153206]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769444620.3793223-778-193225250990935/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:23:41 compute-0 sudo[153204]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:41 compute-0 sudo[153356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmtsxmbgifxoujdqcetttosmjvimvtdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444621.5402977-778-21268778466707/AnsiballZ_stat.py'
Jan 26 16:23:41 compute-0 sudo[153356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:23:42 compute-0 python3.9[153358]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:23:42 compute-0 sudo[153356]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:42 compute-0 sudo[153479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpfmkniaihftxstcdfaoclzwyvqcmnoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444621.5402977-778-21268778466707/AnsiballZ_copy.py'
Jan 26 16:23:42 compute-0 sudo[153479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:23:42 compute-0 python3.9[153481]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769444621.5402977-778-21268778466707/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:23:42 compute-0 sudo[153479]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:43 compute-0 sudo[153631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pthuawjuhveagruyhfksycxfdotjcyre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444622.7766032-778-47476097076281/AnsiballZ_stat.py'
Jan 26 16:23:43 compute-0 sudo[153631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:23:43 compute-0 python3.9[153633]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:23:43 compute-0 sudo[153631]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:43 compute-0 sudo[153754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flchoukhqueltoiwrewqvalrfxkqspga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444622.7766032-778-47476097076281/AnsiballZ_copy.py'
Jan 26 16:23:43 compute-0 sudo[153754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:23:43 compute-0 python3.9[153756]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769444622.7766032-778-47476097076281/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:23:43 compute-0 sudo[153754]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:44 compute-0 sudo[153906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxezaqazslueerrguziivwzzjhwbwjdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444624.118717-778-53721102586828/AnsiballZ_stat.py'
Jan 26 16:23:44 compute-0 sudo[153906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:23:44 compute-0 python3.9[153908]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:23:44 compute-0 sudo[153906]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:45 compute-0 sudo[154029]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cotinbffrvztenwtcjjozcpnzyeunndd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444624.118717-778-53721102586828/AnsiballZ_copy.py'
Jan 26 16:23:45 compute-0 sudo[154029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:23:45 compute-0 python3.9[154031]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769444624.118717-778-53721102586828/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:23:45 compute-0 sudo[154029]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:45 compute-0 python3.9[154181]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 16:23:46 compute-0 sudo[154334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfjxfqwpwofjeripqszuwsbefefcwtaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444626.2521641-984-190531378239196/AnsiballZ_seboolean.py'
Jan 26 16:23:46 compute-0 sudo[154334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:23:46 compute-0 python3.9[154336]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Jan 26 16:23:48 compute-0 sudo[154334]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:48 compute-0 sudo[154490]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhhccuxwlygzpshhjdqiaaxjnsjkznxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444628.3119247-992-40805859013220/AnsiballZ_copy.py'
Jan 26 16:23:48 compute-0 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Jan 26 16:23:48 compute-0 sudo[154490]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:23:48 compute-0 python3.9[154492]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:23:48 compute-0 sudo[154490]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:49 compute-0 sudo[154642]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvtrhjdwkqedrizlmxhnaiifznmbbjtw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444628.9523156-992-84313158183594/AnsiballZ_copy.py'
Jan 26 16:23:49 compute-0 sudo[154642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:23:49 compute-0 python3.9[154644]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:23:49 compute-0 sudo[154642]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:49 compute-0 sudo[154794]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvytfqehwopwgsghlfbihkoqrelvwewg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444629.56378-992-19890432051411/AnsiballZ_copy.py'
Jan 26 16:23:49 compute-0 sudo[154794]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:23:50 compute-0 python3.9[154796]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:23:50 compute-0 sudo[154794]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:50 compute-0 sudo[154946]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jeltxhdbgquwlnfmlanbrqimmpvqtjsf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444630.218093-992-62254935456917/AnsiballZ_copy.py'
Jan 26 16:23:50 compute-0 sudo[154946]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:23:50 compute-0 python3.9[154948]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:23:50 compute-0 sudo[154946]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:51 compute-0 sudo[155098]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfdeajjombaoajqghneuemtjqqbuencs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444630.9419165-992-134935501098273/AnsiballZ_copy.py'
Jan 26 16:23:51 compute-0 sudo[155098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:23:51 compute-0 python3.9[155100]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:23:51 compute-0 sudo[155098]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:52 compute-0 sudo[155250]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlfizwrdigakqetqqfrshhevqihpgbwr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444631.6761925-1028-221022264678379/AnsiballZ_copy.py'
Jan 26 16:23:52 compute-0 sudo[155250]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:23:52 compute-0 python3.9[155252]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:23:52 compute-0 sudo[155250]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:52 compute-0 sudo[155402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sobobdmhgoyqxdrexfsjaypwopgnzlbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444632.413284-1028-122797955646870/AnsiballZ_copy.py'
Jan 26 16:23:52 compute-0 sudo[155402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:23:52 compute-0 python3.9[155404]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:23:52 compute-0 sudo[155402]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:53 compute-0 sudo[155554]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijvbuovxcihmtpafpisgmhmjprplwhvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444633.059905-1028-78767313257994/AnsiballZ_copy.py'
Jan 26 16:23:53 compute-0 sudo[155554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:23:53 compute-0 python3.9[155556]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:23:53 compute-0 sudo[155554]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:54 compute-0 sudo[155706]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttmakfkaofttcvwrrhrymocelzodqxhd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444633.7221959-1028-71738045431404/AnsiballZ_copy.py'
Jan 26 16:23:54 compute-0 sudo[155706]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:23:54 compute-0 python3.9[155708]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:23:54 compute-0 sudo[155706]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:54 compute-0 sudo[155858]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrjfbpzdpawwwksbbtlwdsvecujbelie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444634.4484553-1028-258278396703862/AnsiballZ_copy.py'
Jan 26 16:23:54 compute-0 sudo[155858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:23:54 compute-0 python3.9[155860]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:23:54 compute-0 sudo[155858]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:55 compute-0 sudo[156010]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehbcfwkfafvghdwtubmjwsmkqvfibkrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444635.1846669-1064-19091180540543/AnsiballZ_systemd.py'
Jan 26 16:23:55 compute-0 sudo[156010]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:23:55 compute-0 python3.9[156012]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 26 16:23:55 compute-0 systemd[1]: Reloading.
Jan 26 16:23:55 compute-0 systemd-sysv-generator[156042]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 16:23:55 compute-0 systemd-rc-local-generator[156039]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 16:23:56 compute-0 systemd[1]: Starting libvirt logging daemon socket...
Jan 26 16:23:56 compute-0 systemd[1]: Listening on libvirt logging daemon socket.
Jan 26 16:23:56 compute-0 systemd[1]: Starting libvirt logging daemon admin socket...
Jan 26 16:23:56 compute-0 systemd[1]: Listening on libvirt logging daemon admin socket.
Jan 26 16:23:56 compute-0 systemd[1]: Starting libvirt logging daemon...
Jan 26 16:23:56 compute-0 systemd[1]: Started libvirt logging daemon.
Jan 26 16:23:56 compute-0 sudo[156010]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:56 compute-0 sudo[156202]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uccrchyesxltauvbmcioqsoivaafrgiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444636.3992248-1064-99449696737564/AnsiballZ_systemd.py'
Jan 26 16:23:56 compute-0 sudo[156202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:23:56 compute-0 python3.9[156204]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 26 16:23:56 compute-0 systemd[1]: Reloading.
Jan 26 16:23:57 compute-0 systemd-sysv-generator[156233]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 16:23:57 compute-0 systemd-rc-local-generator[156229]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 16:23:57 compute-0 systemd[1]: Starting libvirt nodedev daemon socket...
Jan 26 16:23:57 compute-0 systemd[1]: Listening on libvirt nodedev daemon socket.
Jan 26 16:23:57 compute-0 systemd[1]: Starting libvirt nodedev daemon admin socket...
Jan 26 16:23:57 compute-0 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Jan 26 16:23:57 compute-0 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Jan 26 16:23:57 compute-0 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Jan 26 16:23:57 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Jan 26 16:23:57 compute-0 systemd[1]: Started libvirt nodedev daemon.
Jan 26 16:23:57 compute-0 sudo[156202]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:57 compute-0 sudo[156418]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhzgevzrzsxykpbopktmjyqwbovbhfyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444637.5362465-1064-83329635865032/AnsiballZ_systemd.py'
Jan 26 16:23:57 compute-0 sudo[156418]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:23:58 compute-0 python3.9[156420]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 26 16:23:58 compute-0 systemd[1]: Reloading.
Jan 26 16:23:58 compute-0 systemd-rc-local-generator[156445]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 16:23:58 compute-0 systemd-sysv-generator[156450]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 16:23:58 compute-0 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Jan 26 16:23:58 compute-0 systemd[1]: Starting libvirt proxy daemon admin socket...
Jan 26 16:23:58 compute-0 systemd[1]: Starting libvirt proxy daemon read-only socket...
Jan 26 16:23:58 compute-0 systemd[1]: Listening on libvirt proxy daemon admin socket.
Jan 26 16:23:58 compute-0 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Jan 26 16:23:58 compute-0 systemd[1]: Starting libvirt proxy daemon...
Jan 26 16:23:58 compute-0 systemd[1]: Started libvirt proxy daemon.
Jan 26 16:23:58 compute-0 sudo[156418]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:58 compute-0 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Jan 26 16:23:58 compute-0 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Jan 26 16:23:58 compute-0 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Jan 26 16:23:58 compute-0 sudo[156636]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bljlaaomzlzsovprvnwkahudicjqijgp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444638.6860569-1064-157920263713203/AnsiballZ_systemd.py'
Jan 26 16:23:58 compute-0 sudo[156636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:23:59 compute-0 python3.9[156638]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 26 16:23:59 compute-0 systemd[1]: Reloading.
Jan 26 16:23:59 compute-0 systemd-sysv-generator[156667]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 16:23:59 compute-0 systemd-rc-local-generator[156664]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 16:23:59 compute-0 systemd[1]: Listening on libvirt locking daemon socket.
Jan 26 16:23:59 compute-0 systemd[1]: Starting libvirt QEMU daemon socket...
Jan 26 16:23:59 compute-0 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 26 16:23:59 compute-0 systemd[1]: Starting Virtual Machine and Container Registration Service...
Jan 26 16:23:59 compute-0 systemd[1]: Listening on libvirt QEMU daemon socket.
Jan 26 16:23:59 compute-0 systemd[1]: Starting libvirt QEMU daemon admin socket...
Jan 26 16:23:59 compute-0 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Jan 26 16:23:59 compute-0 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Jan 26 16:23:59 compute-0 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Jan 26 16:23:59 compute-0 systemd[1]: Started Virtual Machine and Container Registration Service.
Jan 26 16:23:59 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Jan 26 16:23:59 compute-0 systemd[1]: Started libvirt QEMU daemon.
Jan 26 16:23:59 compute-0 sudo[156636]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:59 compute-0 setroubleshoot[156456]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l abf7436c-9077-43eb-8f93-32bbac70bd68
Jan 26 16:23:59 compute-0 setroubleshoot[156456]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
Jan 26 16:23:59 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 26 16:24:00 compute-0 sudo[156854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elvjyljytlyrbogqbopgbhygvxssbodx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444639.8710654-1064-40137176180790/AnsiballZ_systemd.py'
Jan 26 16:24:00 compute-0 sudo[156854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:24:00 compute-0 python3.9[156856]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 26 16:24:00 compute-0 systemd[1]: Reloading.
Jan 26 16:24:00 compute-0 systemd-rc-local-generator[156885]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 16:24:00 compute-0 systemd-sysv-generator[156889]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 16:24:00 compute-0 systemd[1]: Starting libvirt secret daemon socket...
Jan 26 16:24:00 compute-0 systemd[1]: Listening on libvirt secret daemon socket.
Jan 26 16:24:00 compute-0 systemd[1]: Starting libvirt secret daemon admin socket...
Jan 26 16:24:00 compute-0 systemd[1]: Starting libvirt secret daemon read-only socket...
Jan 26 16:24:00 compute-0 systemd[1]: Listening on libvirt secret daemon read-only socket.
Jan 26 16:24:00 compute-0 systemd[1]: Listening on libvirt secret daemon admin socket.
Jan 26 16:24:00 compute-0 systemd[1]: Starting libvirt secret daemon...
Jan 26 16:24:00 compute-0 systemd[1]: Started libvirt secret daemon.
Jan 26 16:24:00 compute-0 sudo[156854]: pam_unix(sudo:session): session closed for user root
Jan 26 16:24:01 compute-0 sudo[157066]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbjlkgszahxzdlrprqiahjzztdedjdwc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444641.1006389-1101-102711499646733/AnsiballZ_file.py'
Jan 26 16:24:01 compute-0 sudo[157066]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:24:01 compute-0 anacron[31011]: Job `cron.daily' started
Jan 26 16:24:01 compute-0 anacron[31011]: Job `cron.daily' terminated
Jan 26 16:24:01 compute-0 python3.9[157068]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:24:01 compute-0 sudo[157066]: pam_unix(sudo:session): session closed for user root
Jan 26 16:24:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:24:01.698 106955 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:24:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:24:01.699 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:24:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:24:01.699 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:24:02 compute-0 sudo[157220]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nydnhviovukifbdndnumicnuxfjslwui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444641.766021-1109-59516758796506/AnsiballZ_find.py'
Jan 26 16:24:02 compute-0 sudo[157220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:24:02 compute-0 python3.9[157222]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 26 16:24:02 compute-0 sudo[157220]: pam_unix(sudo:session): session closed for user root
Jan 26 16:24:02 compute-0 sudo[157372]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgtdfubmhccognzwadetctbrbfhjwube ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444642.651264-1123-47512844977694/AnsiballZ_stat.py'
Jan 26 16:24:02 compute-0 sudo[157372]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:24:03 compute-0 python3.9[157374]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:24:03 compute-0 sudo[157372]: pam_unix(sudo:session): session closed for user root
Jan 26 16:24:03 compute-0 sudo[157495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvshxjjdlxziremiizetjcrtrwlwxgjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444642.651264-1123-47512844977694/AnsiballZ_copy.py'
Jan 26 16:24:03 compute-0 sudo[157495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:24:03 compute-0 python3.9[157497]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769444642.651264-1123-47512844977694/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:24:03 compute-0 sudo[157495]: pam_unix(sudo:session): session closed for user root
Jan 26 16:24:04 compute-0 sudo[157647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdpqtzkbnhhojtgekvrzpodfkdisqsdd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444644.0604732-1139-205156408066774/AnsiballZ_file.py'
Jan 26 16:24:04 compute-0 sudo[157647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:24:04 compute-0 python3.9[157649]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:24:04 compute-0 sudo[157647]: pam_unix(sudo:session): session closed for user root
Jan 26 16:24:05 compute-0 sudo[157809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkbnbrgcsdehbzeezghcfwswvfkomntp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444644.7651634-1147-238499061334300/AnsiballZ_stat.py'
Jan 26 16:24:05 compute-0 sudo[157809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:24:05 compute-0 podman[157773]: 2026-01-26 16:24:05.158495272 +0000 UTC m=+0.098934230 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 26 16:24:05 compute-0 python3.9[157813]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:24:05 compute-0 sudo[157809]: pam_unix(sudo:session): session closed for user root
Jan 26 16:24:05 compute-0 sudo[157895]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqiblwmqchjjyrgetjvbgamjwudlqcbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444644.7651634-1147-238499061334300/AnsiballZ_file.py'
Jan 26 16:24:05 compute-0 sudo[157895]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:24:05 compute-0 python3.9[157897]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:24:05 compute-0 sudo[157895]: pam_unix(sudo:session): session closed for user root
Jan 26 16:24:06 compute-0 sudo[158047]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmeosxejsaxqdifwnktmexwlcydioljp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444646.0449746-1159-36053716495669/AnsiballZ_stat.py'
Jan 26 16:24:06 compute-0 sudo[158047]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:24:06 compute-0 python3.9[158049]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:24:06 compute-0 sudo[158047]: pam_unix(sudo:session): session closed for user root
Jan 26 16:24:06 compute-0 sudo[158125]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqkgmqxsjmmdbmdfkrlovjbfktzuhwhk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444646.0449746-1159-36053716495669/AnsiballZ_file.py'
Jan 26 16:24:06 compute-0 sudo[158125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:24:06 compute-0 python3.9[158127]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.aakfa6xq recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:24:07 compute-0 sudo[158125]: pam_unix(sudo:session): session closed for user root
Jan 26 16:24:07 compute-0 sudo[158277]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygdrpviachquchemygmckoyzsjljuywd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444647.196802-1171-92384977256470/AnsiballZ_stat.py'
Jan 26 16:24:07 compute-0 sudo[158277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:24:07 compute-0 python3.9[158279]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:24:07 compute-0 sudo[158277]: pam_unix(sudo:session): session closed for user root
Jan 26 16:24:08 compute-0 sudo[158355]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aolzqkuvpbgbkgezejngyjbyoiptnujk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444647.196802-1171-92384977256470/AnsiballZ_file.py'
Jan 26 16:24:08 compute-0 sudo[158355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:24:08 compute-0 python3.9[158357]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:24:08 compute-0 sudo[158355]: pam_unix(sudo:session): session closed for user root
Jan 26 16:24:08 compute-0 sudo[158524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgsumonyizjepyukryxailivcwoccjfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444648.5398047-1184-249558113620251/AnsiballZ_command.py'
Jan 26 16:24:08 compute-0 sudo[158524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:24:08 compute-0 podman[158481]: 2026-01-26 16:24:08.947467417 +0000 UTC m=+0.119353687 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 26 16:24:09 compute-0 python3.9[158530]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 16:24:09 compute-0 sudo[158524]: pam_unix(sudo:session): session closed for user root
Jan 26 16:24:09 compute-0 sudo[158688]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdwiqbthgvlyeirpxfabkhvnxsuhdgml ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769444649.2981184-1192-99420574237063/AnsiballZ_edpm_nftables_from_files.py'
Jan 26 16:24:09 compute-0 sudo[158688]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:24:10 compute-0 python3[158690]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 26 16:24:10 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Jan 26 16:24:10 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Consumed 1.003s CPU time.
Jan 26 16:24:10 compute-0 sudo[158688]: pam_unix(sudo:session): session closed for user root
Jan 26 16:24:10 compute-0 systemd[1]: setroubleshootd.service: Deactivated successfully.
Jan 26 16:24:10 compute-0 sudo[158840]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smzzhlfkdtphvqrjywkksiriksodfrbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444650.4352183-1200-48685623509171/AnsiballZ_stat.py'
Jan 26 16:24:10 compute-0 sudo[158840]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:24:11 compute-0 python3.9[158842]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:24:11 compute-0 sudo[158840]: pam_unix(sudo:session): session closed for user root
Jan 26 16:24:11 compute-0 sudo[158918]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-suippipducyirerqlnhxznoyrbqaobwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444650.4352183-1200-48685623509171/AnsiballZ_file.py'
Jan 26 16:24:11 compute-0 sudo[158918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:24:11 compute-0 python3.9[158920]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:24:11 compute-0 sudo[158918]: pam_unix(sudo:session): session closed for user root
Jan 26 16:24:12 compute-0 sudo[159070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwxhzlkemfyihaztdgwnipditlcdilkw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444651.751058-1212-210929903748524/AnsiballZ_stat.py'
Jan 26 16:24:12 compute-0 sudo[159070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:24:12 compute-0 python3.9[159072]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:24:12 compute-0 sudo[159070]: pam_unix(sudo:session): session closed for user root
Jan 26 16:24:12 compute-0 sudo[159195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmhqyrtlbuevmxintwotnysnuubtvses ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444651.751058-1212-210929903748524/AnsiballZ_copy.py'
Jan 26 16:24:12 compute-0 sudo[159195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:24:13 compute-0 python3.9[159197]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769444651.751058-1212-210929903748524/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:24:13 compute-0 sudo[159195]: pam_unix(sudo:session): session closed for user root
Jan 26 16:24:13 compute-0 sudo[159347]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umqqersnzrepmjwxsdyjldegiimftash ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444653.28572-1227-225531655602933/AnsiballZ_stat.py'
Jan 26 16:24:13 compute-0 sudo[159347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:24:13 compute-0 python3.9[159349]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:24:13 compute-0 sudo[159347]: pam_unix(sudo:session): session closed for user root
Jan 26 16:24:14 compute-0 sudo[159425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nyoydklgndxtjybyjmzmixleeqqcavdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444653.28572-1227-225531655602933/AnsiballZ_file.py'
Jan 26 16:24:14 compute-0 sudo[159425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:24:14 compute-0 python3.9[159427]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:24:14 compute-0 sudo[159425]: pam_unix(sudo:session): session closed for user root
Jan 26 16:24:14 compute-0 sudo[159577]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcuaupusszebadcgxgyeubywzsuiaayi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444654.4648995-1239-187299230604128/AnsiballZ_stat.py'
Jan 26 16:24:14 compute-0 sudo[159577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:24:14 compute-0 python3.9[159579]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:24:14 compute-0 sudo[159577]: pam_unix(sudo:session): session closed for user root
Jan 26 16:24:15 compute-0 sudo[159655]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcvxgilpthfhonjzwzgokrzxalsiycyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444654.4648995-1239-187299230604128/AnsiballZ_file.py'
Jan 26 16:24:15 compute-0 sudo[159655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:24:15 compute-0 python3.9[159657]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:24:15 compute-0 sudo[159655]: pam_unix(sudo:session): session closed for user root
Jan 26 16:24:16 compute-0 sudo[159807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xchsamenrhgeleetyshhbgftjkweyldm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444655.6158123-1251-191653542276086/AnsiballZ_stat.py'
Jan 26 16:24:16 compute-0 sudo[159807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:24:16 compute-0 python3.9[159809]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:24:16 compute-0 sudo[159807]: pam_unix(sudo:session): session closed for user root
Jan 26 16:24:16 compute-0 sudo[159932]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfahgrcirzykycwpgrtaffezccgfidnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444655.6158123-1251-191653542276086/AnsiballZ_copy.py'
Jan 26 16:24:16 compute-0 sudo[159932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:24:16 compute-0 python3.9[159934]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769444655.6158123-1251-191653542276086/.source.nft follow=False _original_basename=ruleset.j2 checksum=8a12d4eb5149b6e500230381c1359a710881e9b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:24:16 compute-0 sudo[159932]: pam_unix(sudo:session): session closed for user root
Jan 26 16:24:17 compute-0 sudo[160084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khkbfxwksxvdaifnbaltqoisepcszxgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444657.1088219-1266-104982067844629/AnsiballZ_file.py'
Jan 26 16:24:17 compute-0 sudo[160084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:24:17 compute-0 python3.9[160086]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:24:17 compute-0 sudo[160084]: pam_unix(sudo:session): session closed for user root
Jan 26 16:24:18 compute-0 sudo[160236]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oyjbkryvgyvtoinrchncpccgpieehdrw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444657.7687411-1274-200563579749555/AnsiballZ_command.py'
Jan 26 16:24:18 compute-0 sudo[160236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:24:18 compute-0 python3.9[160238]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 16:24:18 compute-0 sudo[160236]: pam_unix(sudo:session): session closed for user root
Jan 26 16:24:19 compute-0 sudo[160391]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwayutbwlxrnsqgermvyvfdgmgipsvxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444658.7920444-1282-242535366703084/AnsiballZ_blockinfile.py'
Jan 26 16:24:19 compute-0 sudo[160391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:24:19 compute-0 python3.9[160393]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:24:19 compute-0 sudo[160391]: pam_unix(sudo:session): session closed for user root
Jan 26 16:24:20 compute-0 sudo[160543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmztaevmchpuztoitxphjxhxxrpgvtpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444659.8766391-1291-140245171896736/AnsiballZ_command.py'
Jan 26 16:24:20 compute-0 sudo[160543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:24:20 compute-0 python3.9[160545]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 16:24:20 compute-0 sudo[160543]: pam_unix(sudo:session): session closed for user root
Jan 26 16:24:20 compute-0 sudo[160696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrewwjvocpqxvnsposobukuhwrlqunbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444660.6574857-1299-6289861248631/AnsiballZ_stat.py'
Jan 26 16:24:20 compute-0 sudo[160696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:24:21 compute-0 python3.9[160698]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 16:24:21 compute-0 sudo[160696]: pam_unix(sudo:session): session closed for user root
Jan 26 16:24:21 compute-0 sudo[160850]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkmbpwjyowkkrspicpfvwkpvxvcleczp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444661.34948-1307-72297013614556/AnsiballZ_command.py'
Jan 26 16:24:21 compute-0 sudo[160850]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:24:21 compute-0 python3.9[160852]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 16:24:21 compute-0 sudo[160850]: pam_unix(sudo:session): session closed for user root
Jan 26 16:24:22 compute-0 sudo[161005]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azxgnucblanjrlyhxswleqngcxsfbyhe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444662.0579922-1315-37909905531217/AnsiballZ_file.py'
Jan 26 16:24:22 compute-0 sudo[161005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:24:22 compute-0 python3.9[161007]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:24:22 compute-0 sudo[161005]: pam_unix(sudo:session): session closed for user root
Jan 26 16:24:23 compute-0 sudo[161157]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzrdniaagwhbfemytigseuxmbqkafqug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444662.7576232-1323-163923412625061/AnsiballZ_stat.py'
Jan 26 16:24:23 compute-0 sudo[161157]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:24:23 compute-0 python3.9[161159]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:24:23 compute-0 sudo[161157]: pam_unix(sudo:session): session closed for user root
Jan 26 16:24:24 compute-0 sudo[161280]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhtaptuqtwyvwqfjcrhhtepibikitluo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444662.7576232-1323-163923412625061/AnsiballZ_copy.py'
Jan 26 16:24:24 compute-0 sudo[161280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:24:24 compute-0 python3.9[161282]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769444662.7576232-1323-163923412625061/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:24:24 compute-0 sudo[161280]: pam_unix(sudo:session): session closed for user root
Jan 26 16:24:24 compute-0 sudo[161432]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-luzzcpuhgqotwuvvkdbuzwxmhsvnmrwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444664.3909416-1338-238483309759333/AnsiballZ_stat.py'
Jan 26 16:24:24 compute-0 sudo[161432]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:24:25 compute-0 python3.9[161434]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:24:25 compute-0 sudo[161432]: pam_unix(sudo:session): session closed for user root
Jan 26 16:24:25 compute-0 sudo[161555]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhgfpdmwcibumqhdkkmmtjjmyvvbotpp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444664.3909416-1338-238483309759333/AnsiballZ_copy.py'
Jan 26 16:24:25 compute-0 sudo[161555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:24:25 compute-0 python3.9[161557]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769444664.3909416-1338-238483309759333/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:24:25 compute-0 sudo[161555]: pam_unix(sudo:session): session closed for user root
Jan 26 16:24:27 compute-0 sudo[161707]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmmtftnommxtuwtnjprlrtyfuhtxwulx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444666.1754556-1353-82492781758536/AnsiballZ_stat.py'
Jan 26 16:24:27 compute-0 sudo[161707]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:24:27 compute-0 python3.9[161709]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:24:27 compute-0 sudo[161707]: pam_unix(sudo:session): session closed for user root
Jan 26 16:24:28 compute-0 sudo[161830]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amltylbzcugqfkqwedpnzjgspdhrjaqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444666.1754556-1353-82492781758536/AnsiballZ_copy.py'
Jan 26 16:24:28 compute-0 sudo[161830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:24:28 compute-0 python3.9[161832]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769444666.1754556-1353-82492781758536/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:24:28 compute-0 sudo[161830]: pam_unix(sudo:session): session closed for user root
Jan 26 16:24:28 compute-0 sudo[161982]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwqsonuggdosyuldrifnilnieukjlokg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444668.5840125-1368-69928454755409/AnsiballZ_systemd.py'
Jan 26 16:24:28 compute-0 sudo[161982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:24:29 compute-0 python3.9[161984]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 16:24:29 compute-0 systemd[1]: Reloading.
Jan 26 16:24:29 compute-0 systemd-rc-local-generator[162010]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 16:24:29 compute-0 systemd-sysv-generator[162016]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 16:24:29 compute-0 systemd[1]: Reached target edpm_libvirt.target.
Jan 26 16:24:29 compute-0 sudo[161982]: pam_unix(sudo:session): session closed for user root
Jan 26 16:24:30 compute-0 sudo[162173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqxuqqeaqpjpeglrpgcknefixxbeopmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444669.8354545-1376-29499128681537/AnsiballZ_systemd.py'
Jan 26 16:24:30 compute-0 sudo[162173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:24:30 compute-0 python3.9[162175]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 26 16:24:30 compute-0 systemd[1]: Reloading.
Jan 26 16:24:30 compute-0 systemd-sysv-generator[162204]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 16:24:30 compute-0 systemd-rc-local-generator[162201]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 16:24:30 compute-0 systemd[1]: Reloading.
Jan 26 16:24:30 compute-0 systemd-rc-local-generator[162239]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 16:24:30 compute-0 systemd-sysv-generator[162242]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 16:24:31 compute-0 sudo[162173]: pam_unix(sudo:session): session closed for user root
Jan 26 16:24:31 compute-0 sshd-session[107527]: Connection closed by 192.168.122.30 port 33072
Jan 26 16:24:31 compute-0 sshd-session[107524]: pam_unix(sshd:session): session closed for user zuul
Jan 26 16:24:31 compute-0 systemd[1]: session-23.scope: Deactivated successfully.
Jan 26 16:24:31 compute-0 systemd[1]: session-23.scope: Consumed 3min 32.376s CPU time.
Jan 26 16:24:31 compute-0 systemd-logind[788]: Session 23 logged out. Waiting for processes to exit.
Jan 26 16:24:31 compute-0 systemd-logind[788]: Removed session 23.
Jan 26 16:24:36 compute-0 podman[162271]: 2026-01-26 16:24:36.221882106 +0000 UTC m=+0.088369202 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 26 16:24:39 compute-0 podman[162293]: 2026-01-26 16:24:39.227986015 +0000 UTC m=+0.108453949 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 16:24:45 compute-0 sshd-session[162320]: Accepted publickey for zuul from 192.168.122.30 port 47220 ssh2: ECDSA SHA256:e2nTWydmUlQnW5BYhfFSR+TRarHnUqn0luI8Mjiyqhk
Jan 26 16:24:45 compute-0 systemd-logind[788]: New session 24 of user zuul.
Jan 26 16:24:45 compute-0 systemd[1]: Started Session 24 of User zuul.
Jan 26 16:24:45 compute-0 sshd-session[162320]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 26 16:24:46 compute-0 python3.9[162473]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 16:24:48 compute-0 python3.9[162627]: ansible-ansible.builtin.service_facts Invoked
Jan 26 16:24:48 compute-0 network[162644]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 26 16:24:48 compute-0 network[162645]: 'network-scripts' will be removed from distribution in near future.
Jan 26 16:24:48 compute-0 network[162646]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 26 16:24:53 compute-0 sudo[162915]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwvswnhlqimkhzeriagsoxrppttekheu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444692.691721-42-190332277577777/AnsiballZ_setup.py'
Jan 26 16:24:53 compute-0 sudo[162915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:24:53 compute-0 python3.9[162917]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 26 16:24:53 compute-0 sudo[162915]: pam_unix(sudo:session): session closed for user root
Jan 26 16:24:54 compute-0 sudo[162999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlhoflhbxqouybzhnbypepuuugscsjbg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444692.691721-42-190332277577777/AnsiballZ_dnf.py'
Jan 26 16:24:54 compute-0 sudo[162999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:24:54 compute-0 python3.9[163001]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 26 16:24:59 compute-0 sudo[162999]: pam_unix(sudo:session): session closed for user root
Jan 26 16:25:00 compute-0 sudo[163152]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycgwivxsfrolfrcetbrrjtlpffyxhfdl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444699.956089-54-104332584668140/AnsiballZ_stat.py'
Jan 26 16:25:00 compute-0 sudo[163152]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:25:00 compute-0 python3.9[163154]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 16:25:00 compute-0 sudo[163152]: pam_unix(sudo:session): session closed for user root
Jan 26 16:25:01 compute-0 sudo[163304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swmwsiimilhkmkficpkkfbrhczktjxcn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444700.8773057-64-161455074025391/AnsiballZ_command.py'
Jan 26 16:25:01 compute-0 sudo[163304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:25:01 compute-0 python3.9[163306]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 16:25:01 compute-0 sudo[163304]: pam_unix(sudo:session): session closed for user root
Jan 26 16:25:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:25:01.700 106955 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:25:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:25:01.701 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:25:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:25:01.701 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:25:02 compute-0 sudo[163457]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sudureqazbdjdkajyfkgiicdycnnvobb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444701.8843968-74-32869355298515/AnsiballZ_stat.py'
Jan 26 16:25:02 compute-0 sudo[163457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:25:02 compute-0 python3.9[163459]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 16:25:02 compute-0 sudo[163457]: pam_unix(sudo:session): session closed for user root
Jan 26 16:25:03 compute-0 sudo[163609]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzlmmsdenustsnzzrocdurnyndipsrhb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444702.8049214-82-120143231740818/AnsiballZ_command.py'
Jan 26 16:25:03 compute-0 sudo[163609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:25:03 compute-0 python3.9[163611]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 16:25:03 compute-0 sudo[163609]: pam_unix(sudo:session): session closed for user root
Jan 26 16:25:04 compute-0 sudo[163762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dcduuiutllslvfdvdjofyhmtpeweasgc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444703.8764272-90-2959073230922/AnsiballZ_stat.py'
Jan 26 16:25:04 compute-0 sudo[163762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:25:04 compute-0 python3.9[163764]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:25:04 compute-0 sudo[163762]: pam_unix(sudo:session): session closed for user root
Jan 26 16:25:05 compute-0 sudo[163885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbtpvolbaokccqatclgwcstglmeisjjf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444703.8764272-90-2959073230922/AnsiballZ_copy.py'
Jan 26 16:25:05 compute-0 sudo[163885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:25:05 compute-0 python3.9[163887]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769444703.8764272-90-2959073230922/.source.iscsi _original_basename=.0aclq9x2 follow=False checksum=75b29edc122dedea428d4b70583eba2eea984001 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:25:05 compute-0 sudo[163885]: pam_unix(sudo:session): session closed for user root
Jan 26 16:25:06 compute-0 sudo[164047]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uggcwqyvppxwwoyjwdhxksfhzxwebbul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444705.777457-105-139507471617656/AnsiballZ_file.py'
Jan 26 16:25:06 compute-0 sudo[164047]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:25:06 compute-0 podman[164011]: 2026-01-26 16:25:06.458701694 +0000 UTC m=+0.091674980 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 16:25:06 compute-0 python3.9[164058]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:25:06 compute-0 sudo[164047]: pam_unix(sudo:session): session closed for user root
Jan 26 16:25:08 compute-0 sudo[164211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llfzcupnbyknwsxcpggqcqzlbzbrrhfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444708.3629007-113-22134392291012/AnsiballZ_lineinfile.py'
Jan 26 16:25:08 compute-0 sudo[164211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:25:09 compute-0 python3.9[164213]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:25:09 compute-0 sudo[164211]: pam_unix(sudo:session): session closed for user root
Jan 26 16:25:10 compute-0 sudo[164379]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhskbkupzdbxzqlcmutlfeocubaivazv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444709.2744823-122-231912819186697/AnsiballZ_systemd_service.py'
Jan 26 16:25:10 compute-0 sudo[164379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:25:10 compute-0 podman[164337]: 2026-01-26 16:25:10.166715445 +0000 UTC m=+0.119729274 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 26 16:25:10 compute-0 python3.9[164384]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 16:25:10 compute-0 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Jan 26 16:25:10 compute-0 sudo[164379]: pam_unix(sudo:session): session closed for user root
Jan 26 16:25:11 compute-0 sudo[164544]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqtzfjeteeczoijytoyncnbkdobyythq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444710.7161837-130-192559903730397/AnsiballZ_systemd_service.py'
Jan 26 16:25:11 compute-0 sudo[164544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:25:11 compute-0 python3.9[164546]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 16:25:11 compute-0 systemd[1]: Reloading.
Jan 26 16:25:11 compute-0 systemd-rc-local-generator[164573]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 16:25:11 compute-0 systemd-sysv-generator[164577]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 16:25:11 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 26 16:25:11 compute-0 systemd[1]: Starting Open-iSCSI...
Jan 26 16:25:11 compute-0 kernel: Loading iSCSI transport class v2.0-870.
Jan 26 16:25:11 compute-0 systemd[1]: Started Open-iSCSI.
Jan 26 16:25:11 compute-0 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Jan 26 16:25:11 compute-0 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Jan 26 16:25:11 compute-0 sudo[164544]: pam_unix(sudo:session): session closed for user root
Jan 26 16:25:12 compute-0 python3.9[164746]: ansible-ansible.builtin.service_facts Invoked
Jan 26 16:25:12 compute-0 network[164763]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 26 16:25:12 compute-0 network[164764]: 'network-scripts' will be removed from distribution in near future.
Jan 26 16:25:12 compute-0 network[164765]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 26 16:25:16 compute-0 sudo[165034]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idesiyjrtuorksmuleqyxnlyvkdwqyxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444716.4172652-153-82993408222529/AnsiballZ_dnf.py'
Jan 26 16:25:16 compute-0 sudo[165034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:25:17 compute-0 python3.9[165036]: ansible-ansible.legacy.dnf Invoked with name=['device-mapper-multipath'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 26 16:25:19 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 26 16:25:19 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 26 16:25:19 compute-0 systemd[1]: Reloading.
Jan 26 16:25:19 compute-0 systemd-sysv-generator[165085]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 16:25:19 compute-0 systemd-rc-local-generator[165082]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 16:25:19 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 26 16:25:19 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 26 16:25:19 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 26 16:25:19 compute-0 systemd[1]: run-re0f6100ffac642b4b3c1cc4bf66a86d8.service: Deactivated successfully.
Jan 26 16:25:20 compute-0 sudo[165034]: pam_unix(sudo:session): session closed for user root
Jan 26 16:25:21 compute-0 sudo[165349]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrblmtlyskolcuqahdtlmfgmvxnocnxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444721.2045681-162-24030778305274/AnsiballZ_file.py'
Jan 26 16:25:21 compute-0 sudo[165349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:25:21 compute-0 python3.9[165351]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 26 16:25:21 compute-0 sudo[165349]: pam_unix(sudo:session): session closed for user root
Jan 26 16:25:22 compute-0 sudo[165501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnbwjmzsdakbozzpyqlvvcxknxrtqvwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444722.1120336-170-141758965735717/AnsiballZ_modprobe.py'
Jan 26 16:25:22 compute-0 sudo[165501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:25:22 compute-0 python3.9[165503]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Jan 26 16:25:22 compute-0 sudo[165501]: pam_unix(sudo:session): session closed for user root
Jan 26 16:25:23 compute-0 sudo[165657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zomfnlexijhszmpxxkhfuohkmmlyvijh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444723.0383897-178-70777060351958/AnsiballZ_stat.py'
Jan 26 16:25:23 compute-0 sudo[165657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:25:23 compute-0 python3.9[165659]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:25:23 compute-0 sudo[165657]: pam_unix(sudo:session): session closed for user root
Jan 26 16:25:24 compute-0 sudo[165780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tihmidmzgxneurzahtlvhrobsfdsadot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444723.0383897-178-70777060351958/AnsiballZ_copy.py'
Jan 26 16:25:24 compute-0 sudo[165780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:25:24 compute-0 python3.9[165782]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769444723.0383897-178-70777060351958/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:25:24 compute-0 sudo[165780]: pam_unix(sudo:session): session closed for user root
Jan 26 16:25:24 compute-0 sudo[165932]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxxjlxhjodzymowzdzydjogiunctlpta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444724.5767121-194-88747934848349/AnsiballZ_lineinfile.py'
Jan 26 16:25:24 compute-0 sudo[165932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:25:25 compute-0 python3.9[165934]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:25:25 compute-0 sudo[165932]: pam_unix(sudo:session): session closed for user root
Jan 26 16:25:26 compute-0 sudo[166084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zntreumirmbekeyptynevzkeufdwdqwr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444725.3344698-202-273351396060251/AnsiballZ_systemd.py'
Jan 26 16:25:26 compute-0 sudo[166084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:25:26 compute-0 python3.9[166086]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 26 16:25:26 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 26 16:25:26 compute-0 systemd[1]: Stopped Load Kernel Modules.
Jan 26 16:25:26 compute-0 systemd[1]: Stopping Load Kernel Modules...
Jan 26 16:25:26 compute-0 systemd[1]: Starting Load Kernel Modules...
Jan 26 16:25:26 compute-0 systemd[1]: Finished Load Kernel Modules.
Jan 26 16:25:26 compute-0 sudo[166084]: pam_unix(sudo:session): session closed for user root
Jan 26 16:25:27 compute-0 sudo[166240]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgqlwqqtgzbdgsivajkkwuwacroajqry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444726.9971023-210-250372387417972/AnsiballZ_command.py'
Jan 26 16:25:27 compute-0 sudo[166240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:25:27 compute-0 python3.9[166242]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/multipath _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 16:25:27 compute-0 sudo[166240]: pam_unix(sudo:session): session closed for user root
Jan 26 16:25:28 compute-0 sudo[166393]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzbvkjnufgcimatgsdcbutsbtcbxkiyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444727.8606663-220-21471276352296/AnsiballZ_stat.py'
Jan 26 16:25:28 compute-0 sudo[166393]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:25:28 compute-0 python3.9[166395]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 16:25:28 compute-0 sudo[166393]: pam_unix(sudo:session): session closed for user root
Jan 26 16:25:29 compute-0 sudo[166545]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urwdvfqlooebxkxxpreuxqxbgnwfzaka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444728.6671488-229-10354033558533/AnsiballZ_stat.py'
Jan 26 16:25:29 compute-0 sudo[166545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:25:29 compute-0 python3.9[166547]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:25:29 compute-0 sudo[166545]: pam_unix(sudo:session): session closed for user root
Jan 26 16:25:29 compute-0 sudo[166668]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqstscqiazdybuggetisrhyvdeoqzlgk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444728.6671488-229-10354033558533/AnsiballZ_copy.py'
Jan 26 16:25:29 compute-0 sudo[166668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:25:30 compute-0 python3.9[166670]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769444728.6671488-229-10354033558533/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:25:30 compute-0 sudo[166668]: pam_unix(sudo:session): session closed for user root
Jan 26 16:25:30 compute-0 sudo[166820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgimuerxwlmtaxptifvegeieyjhhwgbt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444730.4497235-244-145466749961367/AnsiballZ_command.py'
Jan 26 16:25:30 compute-0 sudo[166820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:25:31 compute-0 python3.9[166822]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 16:25:31 compute-0 sudo[166820]: pam_unix(sudo:session): session closed for user root
Jan 26 16:25:31 compute-0 sudo[166973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xderawftrwafznwnczywpiuroklnimxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444731.3844156-252-111357088320602/AnsiballZ_lineinfile.py'
Jan 26 16:25:31 compute-0 sudo[166973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:25:31 compute-0 python3.9[166975]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:25:31 compute-0 sudo[166973]: pam_unix(sudo:session): session closed for user root
Jan 26 16:25:32 compute-0 sudo[167125]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnhpbbnugqghzcfarupqjnzzypatlmbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444732.1518898-260-198766208490362/AnsiballZ_replace.py'
Jan 26 16:25:32 compute-0 sudo[167125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:25:32 compute-0 python3.9[167127]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:25:33 compute-0 sudo[167125]: pam_unix(sudo:session): session closed for user root
Jan 26 16:25:33 compute-0 sudo[167277]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tawqezkdpnyhsgflalpqsatergcivdab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444733.1808388-268-37438978831159/AnsiballZ_replace.py'
Jan 26 16:25:33 compute-0 sudo[167277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:25:33 compute-0 python3.9[167279]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:25:33 compute-0 sudo[167277]: pam_unix(sudo:session): session closed for user root
Jan 26 16:25:34 compute-0 sudo[167429]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvbeuxjzofwbruwfgnbsfnkxfshgypbh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444734.261508-277-2812575626067/AnsiballZ_lineinfile.py'
Jan 26 16:25:34 compute-0 sudo[167429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:25:34 compute-0 python3.9[167431]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:25:34 compute-0 sudo[167429]: pam_unix(sudo:session): session closed for user root
Jan 26 16:25:35 compute-0 sudo[167581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kuxslbjubrnjwkohfmxrelwogbnhosql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444735.1104944-277-101395262876405/AnsiballZ_lineinfile.py'
Jan 26 16:25:35 compute-0 sudo[167581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:25:35 compute-0 python3.9[167583]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:25:35 compute-0 sudo[167581]: pam_unix(sudo:session): session closed for user root
Jan 26 16:25:36 compute-0 sudo[167733]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcmjucaardcnjspcqjapxlfmldbicupj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444735.83336-277-232057279388638/AnsiballZ_lineinfile.py'
Jan 26 16:25:36 compute-0 sudo[167733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:25:36 compute-0 python3.9[167735]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:25:36 compute-0 sudo[167733]: pam_unix(sudo:session): session closed for user root
Jan 26 16:25:36 compute-0 sudo[167897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfhmxbezauidhmohqactapakccynpurg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444736.4932902-277-149391647715821/AnsiballZ_lineinfile.py'
Jan 26 16:25:36 compute-0 sudo[167897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:25:36 compute-0 podman[167859]: 2026-01-26 16:25:36.83036464 +0000 UTC m=+0.086574672 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Jan 26 16:25:37 compute-0 python3.9[167905]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:25:37 compute-0 sudo[167897]: pam_unix(sudo:session): session closed for user root
Jan 26 16:25:37 compute-0 sudo[168056]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chkekxncyhlnuerlrlkefiqpvpnqnscf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444737.2422428-306-104564035575361/AnsiballZ_stat.py'
Jan 26 16:25:37 compute-0 sudo[168056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:25:37 compute-0 python3.9[168058]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 16:25:37 compute-0 sudo[168056]: pam_unix(sudo:session): session closed for user root
Jan 26 16:25:38 compute-0 sudo[168210]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oujymmpryeapsspkezjgwgdmmtieovgr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444738.0024612-314-22838411582499/AnsiballZ_command.py'
Jan 26 16:25:38 compute-0 sudo[168210]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:25:38 compute-0 python3.9[168212]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/true _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 16:25:38 compute-0 sudo[168210]: pam_unix(sudo:session): session closed for user root
Jan 26 16:25:39 compute-0 sudo[168363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grsxjtylvayucccnidkjgpvpearwgfwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444738.7181098-323-152984971919617/AnsiballZ_systemd_service.py'
Jan 26 16:25:39 compute-0 sudo[168363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:25:39 compute-0 python3.9[168365]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 16:25:39 compute-0 systemd[1]: Listening on multipathd control socket.
Jan 26 16:25:39 compute-0 sudo[168363]: pam_unix(sudo:session): session closed for user root
Jan 26 16:25:40 compute-0 sudo[168531]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmjcplsxubaiwsbutujyxwburgflpsva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444739.6028414-331-258930896406374/AnsiballZ_systemd_service.py'
Jan 26 16:25:40 compute-0 sudo[168531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:25:40 compute-0 podman[168493]: 2026-01-26 16:25:40.364794168 +0000 UTC m=+0.121080331 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, 
org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller)
Jan 26 16:25:40 compute-0 python3.9[168537]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 16:25:41 compute-0 systemd[1]: Starting Wait for udev To Complete Device Initialization...
Jan 26 16:25:41 compute-0 udevadm[168552]: systemd-udev-settle.service is deprecated. Please fix multipathd.service not to pull it in.
Jan 26 16:25:41 compute-0 systemd[1]: Finished Wait for udev To Complete Device Initialization.
Jan 26 16:25:41 compute-0 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 26 16:25:41 compute-0 multipathd[168555]: --------start up--------
Jan 26 16:25:41 compute-0 multipathd[168555]: read /etc/multipath.conf
Jan 26 16:25:41 compute-0 multipathd[168555]: path checkers start up
Jan 26 16:25:41 compute-0 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 26 16:25:41 compute-0 sudo[168531]: pam_unix(sudo:session): session closed for user root
Jan 26 16:25:42 compute-0 sudo[168712]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqnzpyabglrsqsitnbfywdkqhajkxboe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444742.3129601-343-244562011449813/AnsiballZ_file.py'
Jan 26 16:25:42 compute-0 sudo[168712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:25:42 compute-0 python3.9[168714]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 26 16:25:42 compute-0 sudo[168712]: pam_unix(sudo:session): session closed for user root
Jan 26 16:25:43 compute-0 sudo[168864]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbodwbhyukoskceipmsqzwmjhhxxjtvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444743.0625303-351-109774111714798/AnsiballZ_modprobe.py'
Jan 26 16:25:43 compute-0 sudo[168864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:25:43 compute-0 python3.9[168866]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Jan 26 16:25:43 compute-0 kernel: Key type psk registered
Jan 26 16:25:43 compute-0 sudo[168864]: pam_unix(sudo:session): session closed for user root
Jan 26 16:25:44 compute-0 sudo[169027]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smnwtylgnelncbxfbxhqizmbdazhnizs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444743.8857274-359-139526953782604/AnsiballZ_stat.py'
Jan 26 16:25:44 compute-0 sudo[169027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:25:44 compute-0 python3.9[169029]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:25:44 compute-0 sudo[169027]: pam_unix(sudo:session): session closed for user root
Jan 26 16:25:45 compute-0 sudo[169150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbaiimyofrybrpdkcsqixothdsrpzmmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444743.8857274-359-139526953782604/AnsiballZ_copy.py'
Jan 26 16:25:45 compute-0 sudo[169150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:25:45 compute-0 python3.9[169152]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769444743.8857274-359-139526953782604/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:25:45 compute-0 sudo[169150]: pam_unix(sudo:session): session closed for user root
Jan 26 16:25:45 compute-0 sudo[169302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etqmfjziwhkjxpqwpitdlfkxlverlkgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444745.6116517-375-218524242013053/AnsiballZ_lineinfile.py'
Jan 26 16:25:45 compute-0 sudo[169302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:25:46 compute-0 python3.9[169304]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:25:46 compute-0 sudo[169302]: pam_unix(sudo:session): session closed for user root
Jan 26 16:25:46 compute-0 sudo[169454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbdvgdvidvwozyicwcqxvqztlcflfuwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444746.33777-383-174031990327905/AnsiballZ_systemd.py'
Jan 26 16:25:46 compute-0 sudo[169454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:25:46 compute-0 python3.9[169456]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 26 16:25:46 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 26 16:25:46 compute-0 systemd[1]: Stopped Load Kernel Modules.
Jan 26 16:25:46 compute-0 systemd[1]: Stopping Load Kernel Modules...
Jan 26 16:25:46 compute-0 systemd[1]: Starting Load Kernel Modules...
Jan 26 16:25:46 compute-0 systemd[1]: Finished Load Kernel Modules.
Jan 26 16:25:47 compute-0 sudo[169454]: pam_unix(sudo:session): session closed for user root
Jan 26 16:25:47 compute-0 sudo[169610]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqukcfnyxwllpykptanqueoglktbqlsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444747.4193127-391-84912325951105/AnsiballZ_dnf.py'
Jan 26 16:25:47 compute-0 sudo[169610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:25:48 compute-0 python3.9[169612]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 26 16:25:50 compute-0 systemd[1]: Reloading.
Jan 26 16:25:50 compute-0 systemd-rc-local-generator[169644]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 16:25:50 compute-0 systemd-sysv-generator[169649]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 16:25:51 compute-0 systemd[1]: Reloading.
Jan 26 16:25:51 compute-0 systemd-rc-local-generator[169681]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 16:25:51 compute-0 systemd-sysv-generator[169684]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 16:25:51 compute-0 systemd-logind[788]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 26 16:25:51 compute-0 systemd-logind[788]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 26 16:25:51 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 26 16:25:51 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 26 16:25:51 compute-0 systemd[1]: Reloading.
Jan 26 16:25:51 compute-0 systemd-rc-local-generator[169775]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 16:25:51 compute-0 systemd-sysv-generator[169778]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 16:25:52 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 26 16:25:52 compute-0 sudo[169610]: pam_unix(sudo:session): session closed for user root
Jan 26 16:25:53 compute-0 sudo[171062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fshnlwgqisohoajdagptqnuehnfriguc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444752.7070932-399-138287589729818/AnsiballZ_systemd_service.py'
Jan 26 16:25:53 compute-0 sudo[171062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:25:53 compute-0 python3.9[171076]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 26 16:25:53 compute-0 systemd[1]: Stopping Open-iSCSI...
Jan 26 16:25:53 compute-0 iscsid[164587]: iscsid shutting down.
Jan 26 16:25:53 compute-0 systemd[1]: iscsid.service: Deactivated successfully.
Jan 26 16:25:53 compute-0 systemd[1]: Stopped Open-iSCSI.
Jan 26 16:25:53 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 26 16:25:53 compute-0 systemd[1]: Starting Open-iSCSI...
Jan 26 16:25:53 compute-0 systemd[1]: Started Open-iSCSI.
Jan 26 16:25:53 compute-0 sudo[171062]: pam_unix(sudo:session): session closed for user root
Jan 26 16:25:53 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 26 16:25:53 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 26 16:25:53 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.606s CPU time.
Jan 26 16:25:53 compute-0 systemd[1]: run-rdf8233f0ab3043d8a486e2a071fec511.service: Deactivated successfully.
Jan 26 16:25:53 compute-0 sudo[171231]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmrvfsxkwlytshtbfecoclqjmllqjveu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444753.6666672-407-45795350644562/AnsiballZ_systemd_service.py'
Jan 26 16:25:53 compute-0 sudo[171231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:25:54 compute-0 python3.9[171233]: ansible-ansible.builtin.systemd_service Invoked with name=multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 26 16:25:54 compute-0 systemd[1]: Stopping Device-Mapper Multipath Device Controller...
Jan 26 16:25:54 compute-0 multipathd[168555]: exit (signal)
Jan 26 16:25:54 compute-0 multipathd[168555]: --------shut down-------
Jan 26 16:25:54 compute-0 systemd[1]: multipathd.service: Deactivated successfully.
Jan 26 16:25:54 compute-0 systemd[1]: Stopped Device-Mapper Multipath Device Controller.
Jan 26 16:25:54 compute-0 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 26 16:25:54 compute-0 multipathd[171239]: --------start up--------
Jan 26 16:25:54 compute-0 multipathd[171239]: read /etc/multipath.conf
Jan 26 16:25:54 compute-0 multipathd[171239]: path checkers start up
Jan 26 16:25:54 compute-0 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 26 16:25:54 compute-0 sudo[171231]: pam_unix(sudo:session): session closed for user root
Jan 26 16:25:55 compute-0 python3.9[171396]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 16:25:56 compute-0 sudo[171550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqqijehsxfnrkwugomblcwgtbvlaiums ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444755.7730904-425-234201349449024/AnsiballZ_file.py'
Jan 26 16:25:56 compute-0 sudo[171550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:25:56 compute-0 python3.9[171552]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:25:56 compute-0 sudo[171550]: pam_unix(sudo:session): session closed for user root
Jan 26 16:25:57 compute-0 sudo[171702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phgynwncsqsmciwfefhflzvtgwewehdo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444756.9189212-436-112156122408645/AnsiballZ_systemd_service.py'
Jan 26 16:25:57 compute-0 sudo[171702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:25:57 compute-0 systemd[1]: virtnodedevd.service: Deactivated successfully.
Jan 26 16:25:57 compute-0 python3.9[171704]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 26 16:25:57 compute-0 systemd[1]: Reloading.
Jan 26 16:25:57 compute-0 systemd-rc-local-generator[171733]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 16:25:57 compute-0 systemd-sysv-generator[171737]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 16:25:57 compute-0 sudo[171702]: pam_unix(sudo:session): session closed for user root
Jan 26 16:25:58 compute-0 python3.9[171889]: ansible-ansible.builtin.service_facts Invoked
Jan 26 16:25:58 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 26 16:25:58 compute-0 network[171907]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 26 16:25:58 compute-0 network[171908]: 'network-scripts' will be removed from distribution in near future.
Jan 26 16:25:58 compute-0 network[171909]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 26 16:25:59 compute-0 systemd[1]: virtqemud.service: Deactivated successfully.
Jan 26 16:26:00 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Jan 26 16:26:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:26:01.700 106955 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:26:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:26:01.702 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:26:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:26:01.702 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:26:03 compute-0 sudo[172181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rpocnvddqvlgrkqjcwolcvheomzgiddl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444763.1269941-455-145710985636796/AnsiballZ_systemd_service.py'
Jan 26 16:26:03 compute-0 sudo[172181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:26:03 compute-0 python3.9[172183]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 16:26:03 compute-0 sudo[172181]: pam_unix(sudo:session): session closed for user root
Jan 26 16:26:04 compute-0 sudo[172334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whpyheghrxtbbnuoldvyxxbkpnusijdd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444763.9927845-455-93779540298962/AnsiballZ_systemd_service.py'
Jan 26 16:26:04 compute-0 sudo[172334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:26:04 compute-0 python3.9[172336]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 16:26:04 compute-0 sudo[172334]: pam_unix(sudo:session): session closed for user root
Jan 26 16:26:05 compute-0 sudo[172487]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfhxwemrqazebsyzznvoocgrtcvfwsqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444764.865775-455-212490431840144/AnsiballZ_systemd_service.py'
Jan 26 16:26:05 compute-0 sudo[172487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:26:05 compute-0 python3.9[172489]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 16:26:05 compute-0 sudo[172487]: pam_unix(sudo:session): session closed for user root
Jan 26 16:26:05 compute-0 sudo[172640]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkpmbwiswvdgukyljgfebdetehbyucuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444765.6130471-455-86942427471268/AnsiballZ_systemd_service.py'
Jan 26 16:26:05 compute-0 sudo[172640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:26:06 compute-0 python3.9[172642]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 16:26:06 compute-0 sudo[172640]: pam_unix(sudo:session): session closed for user root
Jan 26 16:26:06 compute-0 sudo[172793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzoxvvowmyftnjxotekiqlrllsjbpbfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444766.3526525-455-195800713524372/AnsiballZ_systemd_service.py'
Jan 26 16:26:06 compute-0 sudo[172793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:26:06 compute-0 python3.9[172795]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 16:26:07 compute-0 sudo[172793]: pam_unix(sudo:session): session closed for user root
Jan 26 16:26:07 compute-0 podman[172797]: 2026-01-26 16:26:07.091422968 +0000 UTC m=+0.085363440 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 26 16:26:07 compute-0 sudo[172965]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phrhvercchmfyvjsuoiqynzeugdlnblm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444767.1777396-455-217028789602471/AnsiballZ_systemd_service.py'
Jan 26 16:26:07 compute-0 sudo[172965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:26:07 compute-0 python3.9[172967]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 16:26:07 compute-0 sudo[172965]: pam_unix(sudo:session): session closed for user root
Jan 26 16:26:08 compute-0 sudo[173118]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsblweajxnostpsqsefdafrlohghptlo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444768.0753996-455-15891967890548/AnsiballZ_systemd_service.py'
Jan 26 16:26:08 compute-0 sudo[173118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:26:08 compute-0 python3.9[173120]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 16:26:08 compute-0 sudo[173118]: pam_unix(sudo:session): session closed for user root
Jan 26 16:26:09 compute-0 sudo[173271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwhquagrxlvtvdtobuefecirvrreimip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444768.8950312-455-75695405829769/AnsiballZ_systemd_service.py'
Jan 26 16:26:09 compute-0 sudo[173271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:26:09 compute-0 python3.9[173273]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 16:26:09 compute-0 sudo[173271]: pam_unix(sudo:session): session closed for user root
Jan 26 16:26:10 compute-0 sudo[173424]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amjyuyiaiutvxwoynlvhgfpfrmvyvcbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444770.1147654-514-21507595365585/AnsiballZ_file.py'
Jan 26 16:26:10 compute-0 sudo[173424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:26:10 compute-0 podman[173426]: 2026-01-26 16:26:10.538519535 +0000 UTC m=+0.100377644 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 26 16:26:10 compute-0 python3.9[173427]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:26:10 compute-0 sudo[173424]: pam_unix(sudo:session): session closed for user root
Jan 26 16:26:11 compute-0 sudo[173602]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-huxsgbcdsboysunedvvazkozvioeqnjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444770.7686594-514-108141597596010/AnsiballZ_file.py'
Jan 26 16:26:11 compute-0 sudo[173602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:26:11 compute-0 python3.9[173604]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:26:11 compute-0 sudo[173602]: pam_unix(sudo:session): session closed for user root
Jan 26 16:26:11 compute-0 sudo[173754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nertrbrdbhwhvwdnmnrhsdvpvqysfvja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444771.476259-514-173775080983704/AnsiballZ_file.py'
Jan 26 16:26:11 compute-0 sudo[173754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:26:11 compute-0 python3.9[173756]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:26:11 compute-0 sudo[173754]: pam_unix(sudo:session): session closed for user root
Jan 26 16:26:12 compute-0 sudo[173906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zebrgotftmjzwkdxnuymaiyjxhaieqmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444772.090913-514-234589913636016/AnsiballZ_file.py'
Jan 26 16:26:12 compute-0 sudo[173906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:26:12 compute-0 python3.9[173908]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:26:12 compute-0 sudo[173906]: pam_unix(sudo:session): session closed for user root
Jan 26 16:26:13 compute-0 sudo[174058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbpttzsxnxvmtrrigjilmgydnnvxqtuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444772.772926-514-63110411323318/AnsiballZ_file.py'
Jan 26 16:26:13 compute-0 sudo[174058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:26:13 compute-0 python3.9[174060]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:26:13 compute-0 sudo[174058]: pam_unix(sudo:session): session closed for user root
Jan 26 16:26:13 compute-0 sudo[174210]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gleouzdydxthhpnsmoafnhzgvqxwofae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444773.4227338-514-78511874185683/AnsiballZ_file.py'
Jan 26 16:26:13 compute-0 sudo[174210]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:26:13 compute-0 python3.9[174212]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:26:13 compute-0 sudo[174210]: pam_unix(sudo:session): session closed for user root
Jan 26 16:26:14 compute-0 sudo[174362]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wywwnxfjazzputvdeycmqvsviqpqwxtu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444774.0509603-514-246031531683781/AnsiballZ_file.py'
Jan 26 16:26:14 compute-0 sudo[174362]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:26:14 compute-0 python3.9[174364]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:26:14 compute-0 sudo[174362]: pam_unix(sudo:session): session closed for user root
Jan 26 16:26:15 compute-0 sudo[174514]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oobptiwatjelggobbmfyffajuzpdrcms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444774.8891957-514-263121089686737/AnsiballZ_file.py'
Jan 26 16:26:15 compute-0 sudo[174514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:26:15 compute-0 python3.9[174516]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:26:15 compute-0 sudo[174514]: pam_unix(sudo:session): session closed for user root
Jan 26 16:26:15 compute-0 sudo[174666]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbpegcudqerlkhmhobvugvysaymeffzp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444775.6303012-571-92605850577223/AnsiballZ_file.py'
Jan 26 16:26:15 compute-0 sudo[174666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:26:16 compute-0 python3.9[174668]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:26:16 compute-0 sudo[174666]: pam_unix(sudo:session): session closed for user root
Jan 26 16:26:16 compute-0 sudo[174818]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbwqlurpgijwtjnapphhxxmpvldheutm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444776.3238263-571-106405600324093/AnsiballZ_file.py'
Jan 26 16:26:16 compute-0 sudo[174818]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:26:16 compute-0 python3.9[174820]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:26:16 compute-0 sudo[174818]: pam_unix(sudo:session): session closed for user root
Jan 26 16:26:17 compute-0 sudo[174970]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xioywazexaitpmiydmztfwjxhgjcdmox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444776.9624305-571-165322137109954/AnsiballZ_file.py'
Jan 26 16:26:17 compute-0 sudo[174970]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:26:17 compute-0 python3.9[174972]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:26:17 compute-0 sudo[174970]: pam_unix(sudo:session): session closed for user root
Jan 26 16:26:17 compute-0 sudo[175122]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktyegsqdqeljrqsmyfnkdmwmjzbmyfef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444777.679381-571-198753676811054/AnsiballZ_file.py'
Jan 26 16:26:17 compute-0 sudo[175122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:26:18 compute-0 python3.9[175124]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:26:18 compute-0 sudo[175122]: pam_unix(sudo:session): session closed for user root
Jan 26 16:26:18 compute-0 sudo[175274]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnqwhwrmigpdicouofbmtmoqkkydgzks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444778.2870634-571-121030322736154/AnsiballZ_file.py'
Jan 26 16:26:18 compute-0 sudo[175274]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:26:18 compute-0 python3.9[175276]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:26:18 compute-0 sudo[175274]: pam_unix(sudo:session): session closed for user root
Jan 26 16:26:19 compute-0 sudo[175426]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzatzhhuazirvnrhobgzkaciodoyeang ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444779.0936785-571-112466596537485/AnsiballZ_file.py'
Jan 26 16:26:19 compute-0 sudo[175426]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:26:19 compute-0 python3.9[175428]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:26:19 compute-0 sudo[175426]: pam_unix(sudo:session): session closed for user root
Jan 26 16:26:20 compute-0 sudo[175578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxgrhdpuzjtkpwblrmqzgsfztyohnyct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444779.9661534-571-23261625317765/AnsiballZ_file.py'
Jan 26 16:26:20 compute-0 sudo[175578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:26:20 compute-0 python3.9[175580]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:26:20 compute-0 sudo[175578]: pam_unix(sudo:session): session closed for user root
Jan 26 16:26:20 compute-0 sudo[175730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btszpqhdykilwvxpmtvqgveyfacyifbn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444780.6882906-571-76590253517354/AnsiballZ_file.py'
Jan 26 16:26:20 compute-0 sudo[175730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:26:21 compute-0 python3.9[175732]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:26:21 compute-0 sudo[175730]: pam_unix(sudo:session): session closed for user root
Jan 26 16:26:21 compute-0 sudo[175882]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jagznshedtnnyzwekkrxcyfmygjttgcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444781.5102286-629-202238570721229/AnsiballZ_command.py'
Jan 26 16:26:21 compute-0 sudo[175882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:26:22 compute-0 python3.9[175884]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 16:26:22 compute-0 sudo[175882]: pam_unix(sudo:session): session closed for user root
Jan 26 16:26:22 compute-0 python3.9[176036]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 26 16:26:23 compute-0 sudo[176186]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbzuanvezkperkuhnlnmkeoppxdokhiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444783.2289193-647-213779021080030/AnsiballZ_systemd_service.py'
Jan 26 16:26:23 compute-0 sudo[176186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:26:23 compute-0 python3.9[176188]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 26 16:26:23 compute-0 systemd[1]: Reloading.
Jan 26 16:26:23 compute-0 systemd-rc-local-generator[176214]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 16:26:23 compute-0 systemd-sysv-generator[176217]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 16:26:24 compute-0 sudo[176186]: pam_unix(sudo:session): session closed for user root
Jan 26 16:26:24 compute-0 sudo[176374]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgvfkdhhkjfidfutgccrtqsightfdgff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444784.3703933-655-28729942695877/AnsiballZ_command.py'
Jan 26 16:26:24 compute-0 sudo[176374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:26:24 compute-0 python3.9[176376]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 16:26:24 compute-0 sudo[176374]: pam_unix(sudo:session): session closed for user root
Jan 26 16:26:25 compute-0 sudo[176527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnshatbdccnuilywegnomlfjhrordbeq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444785.055957-655-251745807222915/AnsiballZ_command.py'
Jan 26 16:26:25 compute-0 sudo[176527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:26:25 compute-0 python3.9[176529]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 16:26:25 compute-0 sudo[176527]: pam_unix(sudo:session): session closed for user root
Jan 26 16:26:26 compute-0 sudo[176680]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehlbzzfovjayyfuwgpkqobwrzjzjadyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444785.8147802-655-205491267707707/AnsiballZ_command.py'
Jan 26 16:26:26 compute-0 sudo[176680]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:26:26 compute-0 python3.9[176682]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 16:26:26 compute-0 sudo[176680]: pam_unix(sudo:session): session closed for user root
Jan 26 16:26:27 compute-0 sudo[176833]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wawourlepasycouhsbiqywqtnindvagx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444786.5492084-655-253427330821812/AnsiballZ_command.py'
Jan 26 16:26:27 compute-0 sudo[176833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:26:27 compute-0 python3.9[176835]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 16:26:27 compute-0 sudo[176833]: pam_unix(sudo:session): session closed for user root
Jan 26 16:26:27 compute-0 sudo[176986]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltvgqoqhoobxspyedauuozooybumjpdp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444787.486901-655-183348402612870/AnsiballZ_command.py'
Jan 26 16:26:27 compute-0 sudo[176986]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:26:27 compute-0 python3.9[176988]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 16:26:28 compute-0 sudo[176986]: pam_unix(sudo:session): session closed for user root
Jan 26 16:26:28 compute-0 sudo[177139]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqupnphfypxlvkrwwajoxyfrvmgyozta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444788.1972585-655-23787365964117/AnsiballZ_command.py'
Jan 26 16:26:28 compute-0 sudo[177139]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:26:28 compute-0 python3.9[177141]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 16:26:28 compute-0 sudo[177139]: pam_unix(sudo:session): session closed for user root
Jan 26 16:26:29 compute-0 sudo[177292]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwskdxkwzeidlcuuzgaivbxcwgdkzmwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444788.9716523-655-179147307671986/AnsiballZ_command.py'
Jan 26 16:26:29 compute-0 sudo[177292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:26:29 compute-0 python3.9[177294]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 16:26:29 compute-0 sudo[177292]: pam_unix(sudo:session): session closed for user root
Jan 26 16:26:30 compute-0 sudo[177445]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcscnutxdgapcpcauuvhtjqvfoktmczm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444789.6771212-655-86699374496600/AnsiballZ_command.py'
Jan 26 16:26:30 compute-0 sudo[177445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:26:30 compute-0 python3.9[177447]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 16:26:30 compute-0 sudo[177445]: pam_unix(sudo:session): session closed for user root
Jan 26 16:26:31 compute-0 sudo[177598]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpxpmtxzvfctsflzbwhfolymjkmykoau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444791.4343703-734-227074360977788/AnsiballZ_file.py'
Jan 26 16:26:31 compute-0 sudo[177598]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:26:31 compute-0 python3.9[177600]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:26:31 compute-0 sudo[177598]: pam_unix(sudo:session): session closed for user root
Jan 26 16:26:32 compute-0 sudo[177750]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbjfvtcjozrljhmkigolnhdmmmbcpnyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444792.137275-734-279940500287558/AnsiballZ_file.py'
Jan 26 16:26:32 compute-0 sudo[177750]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:26:32 compute-0 python3.9[177752]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:26:32 compute-0 sudo[177750]: pam_unix(sudo:session): session closed for user root
Jan 26 16:26:33 compute-0 sudo[177902]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdhuukkmarqbdxyllzbmxgkomjchxcia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444792.8024366-734-166787638472914/AnsiballZ_file.py'
Jan 26 16:26:33 compute-0 sudo[177902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:26:33 compute-0 python3.9[177904]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:26:33 compute-0 sudo[177902]: pam_unix(sudo:session): session closed for user root
Jan 26 16:26:33 compute-0 sudo[178054]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqtkdrvvhxrvcdfaqygxjiyyzyehhqdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444793.5417638-756-276316456742485/AnsiballZ_file.py'
Jan 26 16:26:33 compute-0 sudo[178054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:26:34 compute-0 python3.9[178056]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:26:34 compute-0 sudo[178054]: pam_unix(sudo:session): session closed for user root
Jan 26 16:26:34 compute-0 sudo[178206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbxcgiyonxdutzyvukgyayswpuhysnpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444794.2734916-756-154998644733149/AnsiballZ_file.py'
Jan 26 16:26:34 compute-0 sudo[178206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:26:34 compute-0 python3.9[178208]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:26:34 compute-0 sudo[178206]: pam_unix(sudo:session): session closed for user root
Jan 26 16:26:35 compute-0 sudo[178358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knkhlmycpnuszltcjpddthidjunncypb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444794.9826908-756-209482772603367/AnsiballZ_file.py'
Jan 26 16:26:35 compute-0 sudo[178358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:26:35 compute-0 python3.9[178360]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:26:35 compute-0 sudo[178358]: pam_unix(sudo:session): session closed for user root
Jan 26 16:26:36 compute-0 sudo[178510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhbeenqffjknhjyuslstizipdxucdaur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444795.749267-756-250867481630388/AnsiballZ_file.py'
Jan 26 16:26:36 compute-0 sudo[178510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:26:36 compute-0 python3.9[178512]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:26:36 compute-0 sudo[178510]: pam_unix(sudo:session): session closed for user root
Jan 26 16:26:36 compute-0 sudo[178662]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psjcvdloxxfxrnozgaifqdictlofbdal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444796.4503257-756-45476450989986/AnsiballZ_file.py'
Jan 26 16:26:36 compute-0 sudo[178662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:26:36 compute-0 python3.9[178664]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:26:36 compute-0 sudo[178662]: pam_unix(sudo:session): session closed for user root
Jan 26 16:26:37 compute-0 sudo[178824]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gatbljceinsoqftcwirqrvdqgvfncpwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444797.1636124-756-10057507631883/AnsiballZ_file.py'
Jan 26 16:26:37 compute-0 sudo[178824]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:26:37 compute-0 podman[178788]: 2026-01-26 16:26:37.522996825 +0000 UTC m=+0.074733551 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 26 16:26:37 compute-0 python3.9[178834]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:26:37 compute-0 sudo[178824]: pam_unix(sudo:session): session closed for user root
Jan 26 16:26:38 compute-0 sudo[178984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajhzmfulwezaukvwadjictuofapveici ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444797.8607502-756-79244968353519/AnsiballZ_file.py'
Jan 26 16:26:38 compute-0 sudo[178984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:26:38 compute-0 python3.9[178986]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:26:38 compute-0 sudo[178984]: pam_unix(sudo:session): session closed for user root
Jan 26 16:26:41 compute-0 podman[179011]: 2026-01-26 16:26:41.220348532 +0000 UTC m=+0.104048922 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:26:43 compute-0 sudo[179162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhkfbpoluuptatqxtoqvzmorrlwnnetf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444802.8773537-925-99727251647320/AnsiballZ_getent.py'
Jan 26 16:26:43 compute-0 sudo[179162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:26:43 compute-0 python3.9[179164]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Jan 26 16:26:43 compute-0 sudo[179162]: pam_unix(sudo:session): session closed for user root
Jan 26 16:26:44 compute-0 sudo[179315]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-goukuyufmhzegbvruetmqgvgilwdeokb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444803.9047241-933-264663602560881/AnsiballZ_group.py'
Jan 26 16:26:44 compute-0 sudo[179315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:26:44 compute-0 python3.9[179317]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 26 16:26:44 compute-0 groupadd[179318]: group added to /etc/group: name=nova, GID=42436
Jan 26 16:26:44 compute-0 groupadd[179318]: group added to /etc/gshadow: name=nova
Jan 26 16:26:44 compute-0 groupadd[179318]: new group: name=nova, GID=42436
Jan 26 16:26:44 compute-0 sudo[179315]: pam_unix(sudo:session): session closed for user root
Jan 26 16:26:45 compute-0 sudo[179473]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdkvqggrsxlipdsfsypaabcsouirvdvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444804.9343655-941-76698282898150/AnsiballZ_user.py'
Jan 26 16:26:45 compute-0 sudo[179473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:26:45 compute-0 python3.9[179475]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 26 16:26:45 compute-0 useradd[179477]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Jan 26 16:26:45 compute-0 useradd[179477]: add 'nova' to group 'libvirt'
Jan 26 16:26:45 compute-0 useradd[179477]: add 'nova' to shadow group 'libvirt'
Jan 26 16:26:45 compute-0 sudo[179473]: pam_unix(sudo:session): session closed for user root
Jan 26 16:26:46 compute-0 sshd-session[179508]: Accepted publickey for zuul from 192.168.122.30 port 44408 ssh2: ECDSA SHA256:e2nTWydmUlQnW5BYhfFSR+TRarHnUqn0luI8Mjiyqhk
Jan 26 16:26:46 compute-0 systemd-logind[788]: New session 25 of user zuul.
Jan 26 16:26:46 compute-0 systemd[1]: Started Session 25 of User zuul.
Jan 26 16:26:47 compute-0 sshd-session[179508]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 26 16:26:47 compute-0 sshd-session[179511]: Received disconnect from 192.168.122.30 port 44408:11: disconnected by user
Jan 26 16:26:47 compute-0 sshd-session[179511]: Disconnected from user zuul 192.168.122.30 port 44408
Jan 26 16:26:47 compute-0 sshd-session[179508]: pam_unix(sshd:session): session closed for user zuul
Jan 26 16:26:47 compute-0 systemd[1]: session-25.scope: Deactivated successfully.
Jan 26 16:26:47 compute-0 systemd-logind[788]: Session 25 logged out. Waiting for processes to exit.
Jan 26 16:26:47 compute-0 systemd-logind[788]: Removed session 25.
Jan 26 16:26:47 compute-0 python3.9[179661]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:26:48 compute-0 python3.9[179782]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769444807.3589602-966-227808720754993/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:26:49 compute-0 python3.9[179932]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:26:49 compute-0 python3.9[180008]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:26:50 compute-0 python3.9[180158]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:26:50 compute-0 python3.9[180279]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769444809.7713785-966-211018566597216/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:26:51 compute-0 python3.9[180429]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:26:52 compute-0 python3.9[180550]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769444810.9725616-966-281468271447941/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:26:52 compute-0 python3.9[180700]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:26:53 compute-0 python3.9[180821]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769444812.3772082-966-3004087197906/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:26:54 compute-0 python3.9[180971]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:26:54 compute-0 python3.9[181092]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769444813.672502-966-206085490270612/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:26:55 compute-0 sudo[181242]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gadhzhgbqzdgxmorrnzmztgnzzwiwelv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444815.055645-1049-226425681768023/AnsiballZ_file.py'
Jan 26 16:26:55 compute-0 sudo[181242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:26:55 compute-0 python3.9[181244]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:26:55 compute-0 sudo[181242]: pam_unix(sudo:session): session closed for user root
Jan 26 16:26:56 compute-0 sudo[181394]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thgmydsdsvfmlosucrlfgresqakiojjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444815.7976155-1057-140545754696769/AnsiballZ_copy.py'
Jan 26 16:26:56 compute-0 sudo[181394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:26:56 compute-0 python3.9[181396]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:26:56 compute-0 sudo[181394]: pam_unix(sudo:session): session closed for user root
Jan 26 16:26:57 compute-0 sudo[181546]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjldaffpbuldspxoxtovtiuurkhguwcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444816.722264-1065-200051922027799/AnsiballZ_stat.py'
Jan 26 16:26:57 compute-0 sudo[181546]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:26:57 compute-0 python3.9[181548]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 16:26:57 compute-0 sudo[181546]: pam_unix(sudo:session): session closed for user root
Jan 26 16:26:57 compute-0 sudo[181698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fguzvrbszwmljfoqalssbfnjnbkrbtzb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444817.4388275-1073-107252179306348/AnsiballZ_stat.py'
Jan 26 16:26:57 compute-0 sudo[181698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:26:57 compute-0 python3.9[181700]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:26:58 compute-0 sudo[181698]: pam_unix(sudo:session): session closed for user root
Jan 26 16:26:58 compute-0 sudo[181821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krymokgvaveedteuxpbertiwbgdmrjcr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444817.4388275-1073-107252179306348/AnsiballZ_copy.py'
Jan 26 16:26:58 compute-0 sudo[181821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:26:58 compute-0 python3.9[181823]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1769444817.4388275-1073-107252179306348/.source _original_basename=.pdas8oem follow=False checksum=b75f8e8ea90a4aa696b732c86f896b6886f0972b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Jan 26 16:26:58 compute-0 sudo[181821]: pam_unix(sudo:session): session closed for user root
Jan 26 16:26:59 compute-0 python3.9[181975]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 16:27:00 compute-0 python3.9[182127]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:27:00 compute-0 python3.9[182248]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769444819.7154653-1099-106949195505956/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=aff5546b44cf4461a7541a94e4cce1332c9b58b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:27:01 compute-0 python3.9[182398]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:27:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:27:01.701 106955 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:27:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:27:01.702 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:27:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:27:01.703 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:27:02 compute-0 python3.9[182519]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769444820.9528224-1114-160643974762246/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:27:02 compute-0 sudo[182669]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bizrpcrpehgpqvcotvuqmmdexmkkmssk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444822.5018828-1131-13724477562474/AnsiballZ_container_config_data.py'
Jan 26 16:27:02 compute-0 sudo[182669]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:27:03 compute-0 python3.9[182671]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Jan 26 16:27:03 compute-0 sudo[182669]: pam_unix(sudo:session): session closed for user root
Jan 26 16:27:04 compute-0 sudo[182821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjkrtfpytvrfhtmzfpsfddvbonuhqujx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444823.5835385-1142-260747087870864/AnsiballZ_container_config_hash.py'
Jan 26 16:27:04 compute-0 sudo[182821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:27:04 compute-0 python3.9[182823]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 26 16:27:04 compute-0 sudo[182821]: pam_unix(sudo:session): session closed for user root
Jan 26 16:27:05 compute-0 sudo[182973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fphldnzqfesettnfnesxrzpggphhkgqr ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769444824.6273236-1152-115422267171525/AnsiballZ_edpm_container_manage.py'
Jan 26 16:27:05 compute-0 sudo[182973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:27:05 compute-0 python3[182975]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 26 16:27:05 compute-0 podman[183010]: 2026-01-26 16:27:05.743219948 +0000 UTC m=+0.054350569 container create 37cfd9d6a93056a37021c0da53dcf9d4de32dc54e0f9e526cca02f8cf41f4a60 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:27:05 compute-0 podman[183010]: 2026-01-26 16:27:05.713587549 +0000 UTC m=+0.024718150 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 26 16:27:05 compute-0 python3[182975]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Jan 26 16:27:05 compute-0 sudo[182973]: pam_unix(sudo:session): session closed for user root
Jan 26 16:27:06 compute-0 sudo[183197]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grddkkvbbhqsypedayibqvqowxxafzgl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444826.2271357-1160-255307157214215/AnsiballZ_stat.py'
Jan 26 16:27:06 compute-0 sudo[183197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:27:06 compute-0 python3.9[183199]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 16:27:06 compute-0 sudo[183197]: pam_unix(sudo:session): session closed for user root
Jan 26 16:27:07 compute-0 sudo[183362]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjjlwpojnlfffskuatohpajxltcyppnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444827.3158426-1172-125154132868381/AnsiballZ_container_config_data.py'
Jan 26 16:27:07 compute-0 sudo[183362]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:27:07 compute-0 podman[183325]: 2026-01-26 16:27:07.69296285 +0000 UTC m=+0.081289363 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 26 16:27:07 compute-0 python3.9[183368]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Jan 26 16:27:07 compute-0 sudo[183362]: pam_unix(sudo:session): session closed for user root
Jan 26 16:27:08 compute-0 sudo[183522]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmxrhrgardaotroygxiudoofoisulbtn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444828.1872504-1183-225777005225514/AnsiballZ_container_config_hash.py'
Jan 26 16:27:08 compute-0 sudo[183522]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:27:08 compute-0 python3.9[183524]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 26 16:27:08 compute-0 sudo[183522]: pam_unix(sudo:session): session closed for user root
Jan 26 16:27:09 compute-0 sudo[183674]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfgwluimawdamajqjyfsvkvyrtkhuboo ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769444829.075552-1193-51885773936928/AnsiballZ_edpm_container_manage.py'
Jan 26 16:27:09 compute-0 sudo[183674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:27:09 compute-0 python3[183676]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 26 16:27:09 compute-0 podman[183710]: 2026-01-26 16:27:09.975173037 +0000 UTC m=+0.063304630 container create 6e5e7883c98a035cfcc89c1bbf1d83befcf123f062837eb3b9cd1caf4b1af30e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_managed=true, container_name=nova_compute)
Jan 26 16:27:09 compute-0 podman[183710]: 2026-01-26 16:27:09.946099022 +0000 UTC m=+0.034230595 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 26 16:27:09 compute-0 python3[183676]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath --volume /etc/multipath.conf:/etc/multipath.conf:ro,Z --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Jan 26 16:27:10 compute-0 sudo[183674]: pam_unix(sudo:session): session closed for user root
Jan 26 16:27:10 compute-0 sudo[183897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnugbeggniwwyuqmalrezrrqpjmfovhe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444830.3117828-1201-203759146216172/AnsiballZ_stat.py'
Jan 26 16:27:10 compute-0 sudo[183897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:27:10 compute-0 python3.9[183899]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 16:27:10 compute-0 sudo[183897]: pam_unix(sudo:session): session closed for user root
Jan 26 16:27:11 compute-0 sudo[184066]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdriqwytpdleoqizpkieiycpuvukrdfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444831.0963316-1210-79633822374162/AnsiballZ_file.py'
Jan 26 16:27:11 compute-0 sudo[184066]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:27:11 compute-0 podman[184025]: 2026-01-26 16:27:11.475122275 +0000 UTC m=+0.088105349 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible)
Jan 26 16:27:11 compute-0 python3.9[184073]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:27:11 compute-0 sudo[184066]: pam_unix(sudo:session): session closed for user root
Jan 26 16:27:12 compute-0 sudo[184228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stcihcyhxznkaluyydgtdppsdblqsvxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444831.7101736-1210-189559814478417/AnsiballZ_copy.py'
Jan 26 16:27:12 compute-0 sudo[184228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:27:12 compute-0 python3.9[184230]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769444831.7101736-1210-189559814478417/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:27:12 compute-0 sudo[184228]: pam_unix(sudo:session): session closed for user root
Jan 26 16:27:12 compute-0 sudo[184304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qoakimhpuvulzepsrnrcqghfhlovuagr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444831.7101736-1210-189559814478417/AnsiballZ_systemd.py'
Jan 26 16:27:12 compute-0 sudo[184304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:27:12 compute-0 python3.9[184306]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 26 16:27:12 compute-0 systemd[1]: Reloading.
Jan 26 16:27:13 compute-0 systemd-rc-local-generator[184333]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 16:27:13 compute-0 systemd-sysv-generator[184340]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 16:27:13 compute-0 sudo[184304]: pam_unix(sudo:session): session closed for user root
Jan 26 16:27:13 compute-0 sudo[184415]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhxwnkkjrbziufvsgdzqnretmdlvxqwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444831.7101736-1210-189559814478417/AnsiballZ_systemd.py'
Jan 26 16:27:13 compute-0 sudo[184415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:27:13 compute-0 python3.9[184417]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 16:27:13 compute-0 systemd[1]: Reloading.
Jan 26 16:27:14 compute-0 systemd-rc-local-generator[184448]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 16:27:14 compute-0 systemd-sysv-generator[184452]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 16:27:14 compute-0 systemd[1]: Starting nova_compute container...
Jan 26 16:27:14 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:27:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44ae2abc4d5cc1ae56cde84c8eb7e34cd36a46cf10e524313a32d66d92d7fedd/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 26 16:27:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44ae2abc4d5cc1ae56cde84c8eb7e34cd36a46cf10e524313a32d66d92d7fedd/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 26 16:27:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44ae2abc4d5cc1ae56cde84c8eb7e34cd36a46cf10e524313a32d66d92d7fedd/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 26 16:27:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44ae2abc4d5cc1ae56cde84c8eb7e34cd36a46cf10e524313a32d66d92d7fedd/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 26 16:27:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44ae2abc4d5cc1ae56cde84c8eb7e34cd36a46cf10e524313a32d66d92d7fedd/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 26 16:27:14 compute-0 podman[184458]: 2026-01-26 16:27:14.385026117 +0000 UTC m=+0.128694230 container init 6e5e7883c98a035cfcc89c1bbf1d83befcf123f062837eb3b9cd1caf4b1af30e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=nova_compute, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:27:14 compute-0 podman[184458]: 2026-01-26 16:27:14.397589836 +0000 UTC m=+0.141257919 container start 6e5e7883c98a035cfcc89c1bbf1d83befcf123f062837eb3b9cd1caf4b1af30e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, org.label-schema.build-date=20251202)
Jan 26 16:27:14 compute-0 podman[184458]: nova_compute
Jan 26 16:27:14 compute-0 nova_compute[184474]: + sudo -E kolla_set_configs
Jan 26 16:27:14 compute-0 systemd[1]: Started nova_compute container.
Jan 26 16:27:14 compute-0 sudo[184415]: pam_unix(sudo:session): session closed for user root
Jan 26 16:27:14 compute-0 nova_compute[184474]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 26 16:27:14 compute-0 nova_compute[184474]: INFO:__main__:Validating config file
Jan 26 16:27:14 compute-0 nova_compute[184474]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 26 16:27:14 compute-0 nova_compute[184474]: INFO:__main__:Copying service configuration files
Jan 26 16:27:14 compute-0 nova_compute[184474]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 26 16:27:14 compute-0 nova_compute[184474]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 26 16:27:14 compute-0 nova_compute[184474]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 26 16:27:14 compute-0 nova_compute[184474]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 26 16:27:14 compute-0 nova_compute[184474]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 26 16:27:14 compute-0 nova_compute[184474]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 26 16:27:14 compute-0 nova_compute[184474]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 26 16:27:14 compute-0 nova_compute[184474]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 26 16:27:14 compute-0 nova_compute[184474]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 26 16:27:14 compute-0 nova_compute[184474]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 26 16:27:14 compute-0 nova_compute[184474]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 26 16:27:14 compute-0 nova_compute[184474]: INFO:__main__:Deleting /etc/ceph
Jan 26 16:27:14 compute-0 nova_compute[184474]: INFO:__main__:Creating directory /etc/ceph
Jan 26 16:27:14 compute-0 nova_compute[184474]: INFO:__main__:Setting permission for /etc/ceph
Jan 26 16:27:14 compute-0 nova_compute[184474]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 26 16:27:14 compute-0 nova_compute[184474]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 26 16:27:14 compute-0 nova_compute[184474]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 26 16:27:14 compute-0 nova_compute[184474]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 26 16:27:14 compute-0 nova_compute[184474]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 26 16:27:14 compute-0 nova_compute[184474]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 26 16:27:14 compute-0 nova_compute[184474]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 26 16:27:14 compute-0 nova_compute[184474]: INFO:__main__:Writing out command to execute
Jan 26 16:27:14 compute-0 nova_compute[184474]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 26 16:27:14 compute-0 nova_compute[184474]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 26 16:27:14 compute-0 nova_compute[184474]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 26 16:27:14 compute-0 nova_compute[184474]: ++ cat /run_command
Jan 26 16:27:14 compute-0 nova_compute[184474]: + CMD=nova-compute
Jan 26 16:27:14 compute-0 nova_compute[184474]: + ARGS=
Jan 26 16:27:14 compute-0 nova_compute[184474]: + sudo kolla_copy_cacerts
Jan 26 16:27:14 compute-0 nova_compute[184474]: + [[ ! -n '' ]]
Jan 26 16:27:14 compute-0 nova_compute[184474]: + . kolla_extend_start
Jan 26 16:27:14 compute-0 nova_compute[184474]: Running command: 'nova-compute'
Jan 26 16:27:14 compute-0 nova_compute[184474]: + echo 'Running command: '\''nova-compute'\'''
Jan 26 16:27:14 compute-0 nova_compute[184474]: + umask 0022
Jan 26 16:27:14 compute-0 nova_compute[184474]: + exec nova-compute
Jan 26 16:27:15 compute-0 python3.9[184635]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 16:27:16 compute-0 python3.9[184786]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 16:27:16 compute-0 nova_compute[184474]: 2026-01-26 16:27:16.595 184478 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 26 16:27:16 compute-0 nova_compute[184474]: 2026-01-26 16:27:16.595 184478 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 26 16:27:16 compute-0 nova_compute[184474]: 2026-01-26 16:27:16.596 184478 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 26 16:27:16 compute-0 nova_compute[184474]: 2026-01-26 16:27:16.596 184478 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Jan 26 16:27:16 compute-0 nova_compute[184474]: 2026-01-26 16:27:16.749 184478 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:27:16 compute-0 nova_compute[184474]: 2026-01-26 16:27:16.775 184478 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:27:16 compute-0 nova_compute[184474]: 2026-01-26 16:27:16.775 184478 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Jan 26 16:27:16 compute-0 python3.9[184938]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.470 184478 INFO nova.virt.driver [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.616 184478 INFO nova.compute.provider_config [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.726 184478 DEBUG oslo_concurrency.lockutils [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.727 184478 DEBUG oslo_concurrency.lockutils [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.727 184478 DEBUG oslo_concurrency.lockutils [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.728 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.728 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.728 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.728 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.729 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.729 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.729 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.729 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.729 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.729 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.730 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.730 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.730 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.730 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.730 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.731 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.731 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.731 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.731 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.731 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.731 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.732 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.732 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.732 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.732 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.733 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.733 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.733 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.733 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.734 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.734 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.734 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.734 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.734 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.735 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.735 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.735 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.735 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.735 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.736 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.736 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.736 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.736 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.736 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.737 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.737 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.737 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.737 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.737 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.737 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.738 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.738 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.738 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.738 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.738 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.738 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.738 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.739 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.739 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.739 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.739 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.739 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.739 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.739 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.740 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.740 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.740 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.740 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.740 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.740 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.740 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.741 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.741 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.741 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.741 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.741 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.742 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.742 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.742 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.742 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.742 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.742 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.742 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.743 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.743 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.743 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.743 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.743 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.743 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.743 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.744 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.744 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.744 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.744 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.744 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.744 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.744 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.745 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.745 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.745 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.745 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.745 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.745 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.745 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.746 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.746 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.746 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.746 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.746 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.746 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.747 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.747 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.747 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.747 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.747 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.747 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.748 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.748 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.748 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.748 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.748 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.749 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.749 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.749 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.749 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.749 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.749 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.750 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.750 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.750 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.750 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.750 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.751 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.751 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.751 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.752 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.752 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.752 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.752 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.753 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.753 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.753 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.754 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.754 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.754 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.754 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.754 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.754 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.755 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.755 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.755 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.755 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.755 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.755 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.756 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.756 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.756 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.756 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.756 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.756 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.757 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.757 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.757 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.757 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.757 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.757 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.758 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.758 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.758 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.758 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.758 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.758 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.759 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.759 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.759 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.759 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.759 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.760 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.760 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.760 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.760 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.760 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.761 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.761 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.761 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.761 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.761 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.761 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.761 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.761 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.762 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.762 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.762 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.762 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.762 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.763 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.763 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.763 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.763 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.764 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.764 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.764 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.764 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.764 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.764 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.764 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.765 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.765 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.765 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.765 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.765 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.765 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.765 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.766 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.766 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.766 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.766 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.766 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.766 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.767 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.767 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.767 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.767 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.767 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.767 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.768 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.768 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.768 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.768 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.768 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.768 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.768 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.769 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.769 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.769 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.769 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.769 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.769 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.769 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.770 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.770 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.770 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.770 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.770 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.770 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.770 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.771 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.771 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.771 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.771 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.771 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.771 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.771 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.772 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.772 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.772 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.772 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.772 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.772 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.772 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.773 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.773 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.773 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.773 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.773 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.773 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.774 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.774 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.774 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.774 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.774 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.774 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.774 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.775 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.775 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.775 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.775 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.775 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.775 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.775 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.775 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.776 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.776 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.776 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.776 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.776 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.776 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.776 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.777 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.777 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.777 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.777 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.777 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.777 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.778 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.778 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.778 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.778 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.778 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.778 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.779 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.779 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.779 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.779 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.779 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.779 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.779 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.780 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.780 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.780 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.780 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.780 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.780 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.781 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.781 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.781 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.781 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.781 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.781 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.781 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.781 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.782 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.782 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.782 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.782 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.782 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.782 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.783 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.783 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.783 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.783 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.783 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.783 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.783 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.784 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.784 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.784 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.784 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.784 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.784 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.784 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.785 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.785 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.785 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.785 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.785 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.785 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.786 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.786 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.786 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.786 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.786 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.786 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.787 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.787 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.787 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.787 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.788 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.788 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.788 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.788 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.788 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.788 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.788 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.789 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.789 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.789 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.789 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.789 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.789 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.789 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.790 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.790 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.790 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.790 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.790 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.790 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.790 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.791 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.791 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.791 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.791 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.791 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.791 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.791 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.791 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.792 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.792 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.792 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.792 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.792 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.793 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.793 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.793 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.793 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.793 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.793 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.793 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.794 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.794 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.794 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.794 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.794 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.794 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.794 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.794 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.795 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.795 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.795 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.795 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.795 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.795 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.796 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.796 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.796 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.796 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.796 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.796 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.796 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.797 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.797 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.797 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.797 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.797 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.797 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.797 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.798 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.798 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.798 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.798 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.798 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.798 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.798 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.799 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.799 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.799 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.799 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.799 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.799 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.799 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.799 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.800 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.800 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.800 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.800 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.800 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.800 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.800 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.801 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.801 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.801 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.801 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.801 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.802 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.802 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.802 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.802 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.802 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.802 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.802 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.803 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.803 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.803 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.803 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.803 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.803 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.804 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.804 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.804 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.images_rbd_ceph_conf   =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.804 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.804 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.804 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.images_rbd_glance_store_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.805 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.images_rbd_pool        = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.805 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.images_type            = qcow2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.805 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.805 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.805 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.805 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.806 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.806 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.806 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.806 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.807 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.807 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.807 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.807 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.807 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.807 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.807 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.808 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.808 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.808 184478 WARNING oslo_config.cfg [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 26 16:27:17 compute-0 nova_compute[184474]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 26 16:27:17 compute-0 nova_compute[184474]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Jan 26 16:27:17 compute-0 nova_compute[184474]: and ``live_migration_inbound_addr`` respectively.
Jan 26 16:27:17 compute-0 nova_compute[184474]: ).  Its value may be silently ignored in the future.
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.808 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.808 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.808 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.809 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.809 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.809 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.809 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.809 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.809 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.810 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.810 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.810 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.810 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.810 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.810 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.810 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.811 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.811 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.811 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.rbd_secret_uuid        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.811 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.rbd_user               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.811 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.811 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.811 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.812 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.812 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.812 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.812 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.812 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.812 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.812 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.813 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.813 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.813 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.813 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.813 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.813 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.814 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.814 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.814 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.814 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.814 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.814 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.815 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.815 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.815 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.815 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.815 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.815 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.815 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.816 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.816 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.816 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.816 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.816 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.816 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.816 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.817 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.817 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.817 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.817 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.817 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.817 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.817 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.818 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.818 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.818 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.818 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.818 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.818 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.818 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.819 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.819 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.819 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.819 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.819 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.819 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.819 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.820 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.820 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.820 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.820 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.820 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.820 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.821 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.821 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.821 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.821 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.821 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.821 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.821 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.822 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.822 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.822 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.822 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.822 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.822 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.822 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.823 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.823 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.823 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.823 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.823 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.823 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.823 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.824 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.824 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.824 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.824 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.824 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.824 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.824 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.825 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.825 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.825 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.825 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.825 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.825 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.825 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.825 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.826 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.826 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.826 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.826 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.826 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.826 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.826 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.827 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.827 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.827 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.827 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.827 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.827 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.828 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.828 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.828 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.828 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.828 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.828 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.829 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.829 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.829 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.829 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.829 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.830 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.830 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.830 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.830 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.830 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.830 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.831 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.831 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.831 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.831 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.831 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.831 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.832 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.832 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.832 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.832 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.832 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.833 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.833 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.833 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.833 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.833 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.833 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.833 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.834 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.834 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.834 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.834 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.834 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.834 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.834 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.835 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.835 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.835 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.835 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.835 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.836 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.836 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.836 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.836 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.836 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.837 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.837 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.837 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.837 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.837 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.837 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.837 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.838 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.838 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.838 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.838 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.838 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.839 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.839 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.839 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.840 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.840 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.840 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.840 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.841 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.841 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.841 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.841 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.841 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.842 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.842 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.842 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.842 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.842 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.842 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.843 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.843 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.843 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.843 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.843 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.843 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.843 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.844 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.844 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.844 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.844 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.844 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.845 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.845 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.845 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.845 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.846 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.846 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.846 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.846 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.846 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.847 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.847 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.848 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.849 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.849 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.849 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.849 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.850 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.850 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.850 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.850 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.850 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.851 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.851 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.851 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.851 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.852 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.852 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.852 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.852 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.852 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.853 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.853 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.853 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.853 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.854 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.854 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.854 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.854 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.854 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.854 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.855 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.855 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.855 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.855 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.855 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.856 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.856 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.856 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.856 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.856 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.857 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.857 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.857 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.857 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.857 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.858 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.858 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.858 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.858 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.858 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.859 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.859 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.859 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.859 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.859 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.860 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.860 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.860 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.860 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.861 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.861 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.861 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.861 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.861 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.861 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.862 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.862 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.862 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.862 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.862 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.863 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.863 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.863 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.863 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.863 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.864 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.864 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.864 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.864 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.864 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.865 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.865 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.865 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.865 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.865 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.866 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.866 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.866 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.866 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.866 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.867 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.867 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.867 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.867 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.867 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.868 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.868 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.868 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.868 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.868 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.869 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.869 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.869 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.869 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.869 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.870 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.870 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.870 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.870 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.870 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.871 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.871 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.871 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.871 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.871 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.871 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.872 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.872 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.872 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.872 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.872 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.873 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.873 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.873 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.873 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.873 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.874 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.874 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.874 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.874 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.874 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.874 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.875 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.875 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.875 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.875 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.875 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.875 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.876 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.876 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.876 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.876 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.876 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.876 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.877 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.877 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.877 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.877 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.877 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.878 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.878 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.878 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.878 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.878 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.879 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.879 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.879 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.879 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.879 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.880 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.880 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.880 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.880 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.880 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.881 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.881 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.881 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.881 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.881 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.882 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.882 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.882 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.882 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.882 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.883 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.883 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.883 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.883 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.883 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.884 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.884 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.884 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.884 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.884 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.885 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.885 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.885 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.885 184478 DEBUG oslo_service.service [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.886 184478 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Jan 26 16:27:17 compute-0 sudo[185090]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmtpjrhpwwuszhxkqtcyadlacwjpdxnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444837.1781738-1270-130427595096133/AnsiballZ_podman_container.py'
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.907 184478 DEBUG nova.virt.libvirt.host [None req-78ff9f39-c0b8-4479-b839-627f81abe678 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.908 184478 DEBUG nova.virt.libvirt.host [None req-78ff9f39-c0b8-4479-b839-627f81abe678 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.908 184478 DEBUG nova.virt.libvirt.host [None req-78ff9f39-c0b8-4479-b839-627f81abe678 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Jan 26 16:27:17 compute-0 sudo[185090]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.908 184478 DEBUG nova.virt.libvirt.host [None req-78ff9f39-c0b8-4479-b839-627f81abe678 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Jan 26 16:27:17 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Jan 26 16:27:17 compute-0 systemd[1]: Started libvirt QEMU daemon.
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.981 184478 DEBUG nova.virt.libvirt.host [None req-78ff9f39-c0b8-4479-b839-627f81abe678 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f5ade08a790> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.984 184478 DEBUG nova.virt.libvirt.host [None req-78ff9f39-c0b8-4479-b839-627f81abe678 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f5ade08a790> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Jan 26 16:27:17 compute-0 nova_compute[184474]: 2026-01-26 16:27:17.984 184478 INFO nova.virt.libvirt.driver [None req-78ff9f39-c0b8-4479-b839-627f81abe678 - - - - - -] Connection event '1' reason 'None'
Jan 26 16:27:18 compute-0 nova_compute[184474]: 2026-01-26 16:27:18.001 184478 WARNING nova.virt.libvirt.driver [None req-78ff9f39-c0b8-4479-b839-627f81abe678 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Jan 26 16:27:18 compute-0 nova_compute[184474]: 2026-01-26 16:27:18.002 184478 DEBUG nova.virt.libvirt.volume.mount [None req-78ff9f39-c0b8-4479-b839-627f81abe678 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Jan 26 16:27:18 compute-0 python3.9[185094]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None 
preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 26 16:27:18 compute-0 sudo[185090]: pam_unix(sudo:session): session closed for user root
Jan 26 16:27:18 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 26 16:27:18 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 26 16:27:18 compute-0 nova_compute[184474]: 2026-01-26 16:27:18.925 184478 INFO nova.virt.libvirt.host [None req-78ff9f39-c0b8-4479-b839-627f81abe678 - - - - - -] Libvirt host capabilities <capabilities>
Jan 26 16:27:18 compute-0 nova_compute[184474]: 
Jan 26 16:27:18 compute-0 nova_compute[184474]:   <host>
Jan 26 16:27:18 compute-0 nova_compute[184474]:     <uuid>07141d90-ae2c-4848-91d9-402155316ee1</uuid>
Jan 26 16:27:18 compute-0 nova_compute[184474]:     <cpu>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <arch>x86_64</arch>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model>EPYC-Rome-v4</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <vendor>AMD</vendor>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <microcode version='16777317'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <signature family='23' model='49' stepping='0'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <maxphysaddr mode='emulate' bits='40'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <feature name='x2apic'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <feature name='tsc-deadline'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <feature name='osxsave'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <feature name='hypervisor'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <feature name='tsc_adjust'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <feature name='spec-ctrl'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <feature name='stibp'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <feature name='arch-capabilities'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <feature name='ssbd'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <feature name='cmp_legacy'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <feature name='topoext'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <feature name='virt-ssbd'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <feature name='lbrv'/>
Jan 26 16:27:18 compute-0 sudo[185323]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wifaeybucsspkkughjtlwqwtozqkecqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444838.6047955-1278-33126467747343/AnsiballZ_systemd.py'
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <feature name='tsc-scale'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <feature name='vmcb-clean'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <feature name='pause-filter'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <feature name='pfthreshold'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <feature name='svme-addr-chk'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <feature name='rdctl-no'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <feature name='skip-l1dfl-vmentry'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <feature name='mds-no'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <feature name='pschange-mc-no'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <pages unit='KiB' size='4'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <pages unit='KiB' size='2048'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <pages unit='KiB' size='1048576'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:     </cpu>
Jan 26 16:27:18 compute-0 nova_compute[184474]:     <power_management>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <suspend_mem/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <suspend_disk/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <suspend_hybrid/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:     </power_management>
Jan 26 16:27:18 compute-0 nova_compute[184474]:     <iommu support='no'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:     <migration_features>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <live/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <uri_transports>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <uri_transport>tcp</uri_transport>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <uri_transport>rdma</uri_transport>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       </uri_transports>
Jan 26 16:27:18 compute-0 nova_compute[184474]:     </migration_features>
Jan 26 16:27:18 compute-0 nova_compute[184474]:     <topology>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <cells num='1'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <cell id='0'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:           <memory unit='KiB'>7864316</memory>
Jan 26 16:27:18 compute-0 nova_compute[184474]:           <pages unit='KiB' size='4'>1966079</pages>
Jan 26 16:27:18 compute-0 nova_compute[184474]:           <pages unit='KiB' size='2048'>0</pages>
Jan 26 16:27:18 compute-0 nova_compute[184474]:           <pages unit='KiB' size='1048576'>0</pages>
Jan 26 16:27:18 compute-0 nova_compute[184474]:           <distances>
Jan 26 16:27:18 compute-0 nova_compute[184474]:             <sibling id='0' value='10'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:           </distances>
Jan 26 16:27:18 compute-0 nova_compute[184474]:           <cpus num='8'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:           </cpus>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         </cell>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       </cells>
Jan 26 16:27:18 compute-0 nova_compute[184474]:     </topology>
Jan 26 16:27:18 compute-0 nova_compute[184474]:     <cache>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:     </cache>
Jan 26 16:27:18 compute-0 nova_compute[184474]:     <secmodel>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model>selinux</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <doi>0</doi>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Jan 26 16:27:18 compute-0 nova_compute[184474]:     </secmodel>
Jan 26 16:27:18 compute-0 nova_compute[184474]:     <secmodel>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model>dac</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <doi>0</doi>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <baselabel type='kvm'>+107:+107</baselabel>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <baselabel type='qemu'>+107:+107</baselabel>
Jan 26 16:27:18 compute-0 nova_compute[184474]:     </secmodel>
Jan 26 16:27:18 compute-0 nova_compute[184474]:   </host>
Jan 26 16:27:18 compute-0 nova_compute[184474]: 
Jan 26 16:27:18 compute-0 nova_compute[184474]:   <guest>
Jan 26 16:27:18 compute-0 nova_compute[184474]:     <os_type>hvm</os_type>
Jan 26 16:27:18 compute-0 nova_compute[184474]:     <arch name='i686'>
Jan 26 16:27:18 compute-0 sudo[185323]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <wordsize>32</wordsize>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <domain type='qemu'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <domain type='kvm'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:     </arch>
Jan 26 16:27:18 compute-0 nova_compute[184474]:     <features>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <pae/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <nonpae/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <acpi default='on' toggle='yes'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <apic default='on' toggle='no'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <cpuselection/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <deviceboot/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <disksnapshot default='on' toggle='no'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <externalSnapshot/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:     </features>
Jan 26 16:27:18 compute-0 nova_compute[184474]:   </guest>
Jan 26 16:27:18 compute-0 nova_compute[184474]: 
Jan 26 16:27:18 compute-0 nova_compute[184474]:   <guest>
Jan 26 16:27:18 compute-0 nova_compute[184474]:     <os_type>hvm</os_type>
Jan 26 16:27:18 compute-0 nova_compute[184474]:     <arch name='x86_64'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <wordsize>64</wordsize>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <domain type='qemu'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <domain type='kvm'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:     </arch>
Jan 26 16:27:18 compute-0 nova_compute[184474]:     <features>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <acpi default='on' toggle='yes'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <apic default='on' toggle='no'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <cpuselection/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <deviceboot/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <disksnapshot default='on' toggle='no'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <externalSnapshot/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:     </features>
Jan 26 16:27:18 compute-0 nova_compute[184474]:   </guest>
Jan 26 16:27:18 compute-0 nova_compute[184474]: 
Jan 26 16:27:18 compute-0 nova_compute[184474]: </capabilities>
Jan 26 16:27:18 compute-0 nova_compute[184474]: 
Jan 26 16:27:18 compute-0 nova_compute[184474]: 2026-01-26 16:27:18.931 184478 DEBUG nova.virt.libvirt.host [None req-78ff9f39-c0b8-4479-b839-627f81abe678 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 26 16:27:18 compute-0 nova_compute[184474]: 2026-01-26 16:27:18.957 184478 DEBUG nova.virt.libvirt.host [None req-78ff9f39-c0b8-4479-b839-627f81abe678 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Jan 26 16:27:18 compute-0 nova_compute[184474]: <domainCapabilities>
Jan 26 16:27:18 compute-0 nova_compute[184474]:   <path>/usr/libexec/qemu-kvm</path>
Jan 26 16:27:18 compute-0 nova_compute[184474]:   <domain>kvm</domain>
Jan 26 16:27:18 compute-0 nova_compute[184474]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 26 16:27:18 compute-0 nova_compute[184474]:   <arch>i686</arch>
Jan 26 16:27:18 compute-0 nova_compute[184474]:   <vcpu max='240'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:   <iothreads supported='yes'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:   <os supported='yes'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:     <enum name='firmware'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:     <loader supported='yes'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <enum name='type'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <value>rom</value>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <value>pflash</value>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <enum name='readonly'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <value>yes</value>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <value>no</value>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <enum name='secure'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <value>no</value>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:18 compute-0 nova_compute[184474]:     </loader>
Jan 26 16:27:18 compute-0 nova_compute[184474]:   </os>
Jan 26 16:27:18 compute-0 nova_compute[184474]:   <cpu>
Jan 26 16:27:18 compute-0 nova_compute[184474]:     <mode name='host-passthrough' supported='yes'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <enum name='hostPassthroughMigratable'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <value>on</value>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <value>off</value>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:18 compute-0 nova_compute[184474]:     </mode>
Jan 26 16:27:18 compute-0 nova_compute[184474]:     <mode name='maximum' supported='yes'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <enum name='maximumMigratable'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <value>on</value>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <value>off</value>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:18 compute-0 nova_compute[184474]:     </mode>
Jan 26 16:27:18 compute-0 nova_compute[184474]:     <mode name='host-model' supported='yes'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <vendor>AMD</vendor>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <feature policy='require' name='x2apic'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <feature policy='require' name='tsc-deadline'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <feature policy='require' name='hypervisor'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <feature policy='require' name='tsc_adjust'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <feature policy='require' name='spec-ctrl'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <feature policy='require' name='stibp'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <feature policy='require' name='ssbd'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <feature policy='require' name='cmp_legacy'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <feature policy='require' name='overflow-recov'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <feature policy='require' name='succor'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <feature policy='require' name='ibrs'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <feature policy='require' name='amd-ssbd'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <feature policy='require' name='virt-ssbd'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <feature policy='require' name='lbrv'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <feature policy='require' name='tsc-scale'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <feature policy='require' name='vmcb-clean'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <feature policy='require' name='flushbyasid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <feature policy='require' name='pause-filter'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <feature policy='require' name='pfthreshold'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <feature policy='require' name='svme-addr-chk'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <feature policy='disable' name='xsaves'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:     </mode>
Jan 26 16:27:18 compute-0 nova_compute[184474]:     <mode name='custom' supported='yes'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <blockers model='Broadwell'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <blockers model='Broadwell-IBRS'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <blockers model='Broadwell-noTSX'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <blockers model='Broadwell-v1'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <blockers model='Broadwell-v2'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <blockers model='Broadwell-v3'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <blockers model='Broadwell-v4'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <blockers model='Cascadelake-Server'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <blockers model='Cascadelake-Server-v1'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <blockers model='Cascadelake-Server-v2'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <blockers model='Cascadelake-Server-v3'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <blockers model='Cascadelake-Server-v4'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <blockers model='Cascadelake-Server-v5'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <blockers model='ClearwaterForest'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx-ifma'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx-ne-convert'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx-vnni'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx-vnni-int16'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx-vnni-int8'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='bhi-ctrl'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='bhi-no'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='cldemote'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='cmpccxadd'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='ddpd-u'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='fbsdp-no'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='fsrs'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='intel-psfd'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='ipred-ctrl'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='lam'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='mcdt-no'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='movdir64b'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='movdiri'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pbrsb-no'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='prefetchiti'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='psdp-no'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='rrsba-ctrl'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='serialize'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='sha512'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='sm3'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='sm4'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='ss'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <blockers model='ClearwaterForest-v1'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx-ifma'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx-ne-convert'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx-vnni'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx-vnni-int16'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx-vnni-int8'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='bhi-ctrl'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='bhi-no'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='cldemote'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='cmpccxadd'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='ddpd-u'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='fbsdp-no'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='fsrs'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='intel-psfd'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='ipred-ctrl'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='lam'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='mcdt-no'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='movdir64b'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='movdiri'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pbrsb-no'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='prefetchiti'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='psdp-no'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='rrsba-ctrl'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='serialize'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='sha512'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='sm3'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='sm4'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='ss'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <blockers model='Cooperlake'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512-bf16'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='taa-no'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <blockers model='Cooperlake-v1'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512-bf16'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='taa-no'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <blockers model='Cooperlake-v2'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512-bf16'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='taa-no'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <blockers model='Denverton'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='mpx'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <blockers model='Denverton-v1'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='mpx'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <blockers model='Denverton-v2'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <blockers model='Denverton-v3'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <blockers model='Dhyana-v2'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <blockers model='EPYC-Genoa'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='amd-psfd'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='auto-ibrs'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512-bf16'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512ifma'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='no-nested-data-bp'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='null-sel-clr-base'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='stibp-always-on'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <blockers model='EPYC-Genoa-v1'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='amd-psfd'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='auto-ibrs'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512-bf16'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512ifma'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='no-nested-data-bp'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='null-sel-clr-base'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='stibp-always-on'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <blockers model='EPYC-Genoa-v2'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='amd-psfd'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='auto-ibrs'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512-bf16'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512ifma'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='fs-gs-base-ns'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='no-nested-data-bp'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='null-sel-clr-base'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='perfmon-v2'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='stibp-always-on'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <blockers model='EPYC-Milan'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <blockers model='EPYC-Milan-v1'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <blockers model='EPYC-Milan-v2'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='amd-psfd'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='no-nested-data-bp'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='null-sel-clr-base'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='stibp-always-on'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <blockers model='EPYC-Milan-v3'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='amd-psfd'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='no-nested-data-bp'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='null-sel-clr-base'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='stibp-always-on'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <blockers model='EPYC-Rome'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <blockers model='EPYC-Rome-v1'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <blockers model='EPYC-Rome-v2'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <blockers model='EPYC-Rome-v3'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <blockers model='EPYC-Turin'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='amd-psfd'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='auto-ibrs'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx-vnni'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512-bf16'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512-vp2intersect'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512ifma'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='fs-gs-base-ns'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='ibpb-brtype'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='movdir64b'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='movdiri'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='no-nested-data-bp'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='null-sel-clr-base'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='perfmon-v2'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='prefetchi'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='sbpb'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='srso-user-kernel-no'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='stibp-always-on'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <blockers model='EPYC-Turin-v1'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='amd-psfd'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='auto-ibrs'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx-vnni'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512-bf16'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512-vp2intersect'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512ifma'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='fs-gs-base-ns'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='ibpb-brtype'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='movdir64b'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='movdiri'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='no-nested-data-bp'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='null-sel-clr-base'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='perfmon-v2'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='prefetchi'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='sbpb'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='srso-user-kernel-no'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='stibp-always-on'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <blockers model='EPYC-v3'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <blockers model='EPYC-v4'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <blockers model='EPYC-v5'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <blockers model='GraniteRapids'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='amx-bf16'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='amx-fp16'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='amx-int8'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='amx-tile'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx-vnni'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512-bf16'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512-fp16'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512ifma'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='fbsdp-no'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='fsrc'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='fsrs'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='fzrm'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='mcdt-no'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pbrsb-no'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='prefetchiti'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='psdp-no'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='serialize'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='taa-no'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='tsx-ldtrk'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='xfd'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <blockers model='GraniteRapids-v1'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='amx-bf16'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='amx-fp16'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='amx-int8'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='amx-tile'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx-vnni'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512-bf16'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512-fp16'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512ifma'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='fbsdp-no'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='fsrc'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='fsrs'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='fzrm'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='mcdt-no'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pbrsb-no'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='prefetchiti'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='psdp-no'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='serialize'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='taa-no'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='tsx-ldtrk'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='xfd'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <blockers model='GraniteRapids-v2'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='amx-bf16'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='amx-fp16'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='amx-int8'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='amx-tile'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx-vnni'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx10'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx10-128'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx10-256'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx10-512'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512-bf16'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512-fp16'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512ifma'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='cldemote'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='fbsdp-no'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='fsrc'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='fsrs'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='fzrm'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='mcdt-no'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='movdir64b'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='movdiri'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pbrsb-no'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='prefetchiti'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='psdp-no'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='serialize'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='ss'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='taa-no'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='tsx-ldtrk'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='xfd'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <blockers model='GraniteRapids-v3'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='amx-bf16'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='amx-fp16'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='amx-int8'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='amx-tile'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx-vnni'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx10'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx10-128'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx10-256'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx10-512'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512-bf16'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512-fp16'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512ifma'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='cldemote'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='fbsdp-no'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='fsrc'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='fsrs'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='fzrm'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='mcdt-no'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='movdir64b'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='movdiri'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pbrsb-no'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='prefetchiti'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='psdp-no'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='serialize'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='ss'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='taa-no'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='tsx-ldtrk'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='xfd'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <blockers model='Haswell'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <blockers model='Haswell-IBRS'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <blockers model='Haswell-noTSX'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <blockers model='Haswell-v1'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <blockers model='Haswell-v2'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <blockers model='Haswell-v3'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <blockers model='Haswell-v4'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <blockers model='Icelake-Server'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <blockers model='Icelake-Server-noTSX'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <blockers model='Icelake-Server-v1'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <blockers model='Icelake-Server-v2'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <blockers model='Icelake-Server-v3'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='taa-no'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <blockers model='Icelake-Server-v4'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512ifma'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='taa-no'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <blockers model='Icelake-Server-v5'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512ifma'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='taa-no'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <blockers model='Icelake-Server-v6'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512ifma'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='taa-no'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <blockers model='Icelake-Server-v7'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512ifma'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='taa-no'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <blockers model='IvyBridge'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <blockers model='IvyBridge-IBRS'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <blockers model='IvyBridge-v1'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <blockers model='IvyBridge-v2'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <blockers model='KnightsMill'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512-4fmaps'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512-4vnniw'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512er'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512pf'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='ss'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <blockers model='KnightsMill-v1'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512-4fmaps'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512-4vnniw'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512er'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512pf'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='ss'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <blockers model='Opteron_G4'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='fma4'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='xop'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <blockers model='Opteron_G4-v1'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='fma4'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='xop'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <blockers model='Opteron_G5'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='fma4'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='tbm'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='xop'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <blockers model='Opteron_G5-v1'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='fma4'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='tbm'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='xop'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <blockers model='SapphireRapids'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='amx-bf16'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='amx-int8'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='amx-tile'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx-vnni'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512-bf16'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512-fp16'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512ifma'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='fsrc'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='fsrs'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='fzrm'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='serialize'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='taa-no'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='tsx-ldtrk'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='xfd'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <blockers model='SapphireRapids-v1'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='amx-bf16'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='amx-int8'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='amx-tile'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx-vnni'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512-bf16'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512-fp16'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512ifma'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='fsrc'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='fsrs'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='fzrm'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='serialize'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='taa-no'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='tsx-ldtrk'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='xfd'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <blockers model='SapphireRapids-v2'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='amx-bf16'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='amx-int8'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='amx-tile'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx-vnni'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512-bf16'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512-fp16'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512ifma'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='fbsdp-no'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='fsrc'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='fsrs'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='fzrm'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='psdp-no'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='serialize'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='taa-no'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='tsx-ldtrk'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='xfd'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <blockers model='SapphireRapids-v3'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='amx-bf16'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='amx-int8'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='amx-tile'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx-vnni'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512-bf16'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512-fp16'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512ifma'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='cldemote'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='fbsdp-no'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='fsrc'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='fsrs'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='fzrm'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='movdir64b'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='movdiri'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='psdp-no'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='serialize'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='ss'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='taa-no'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='tsx-ldtrk'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='xfd'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 26 16:27:18 compute-0 nova_compute[184474]:       <blockers model='SapphireRapids-v4'>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='amx-bf16'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='amx-int8'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='amx-tile'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx-vnni'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512-bf16'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512-fp16'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:18 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512ifma'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='cldemote'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fbsdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrc'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrs'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fzrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdir64b'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdiri'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='psdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='serialize'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ss'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='taa-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='tsx-ldtrk'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xfd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='SierraForest'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-ifma'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-ne-convert'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-vnni-int8'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='cmpccxadd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fbsdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrs'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='mcdt-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pbrsb-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='psdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='serialize'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='SierraForest-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-ifma'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-ne-convert'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-vnni-int8'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='cmpccxadd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fbsdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrs'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='mcdt-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pbrsb-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='psdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='serialize'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='SierraForest-v2'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-ifma'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-ne-convert'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-vnni-int8'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='bhi-ctrl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='cldemote'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='cmpccxadd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fbsdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrs'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='intel-psfd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ipred-ctrl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='lam'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='mcdt-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdir64b'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdiri'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pbrsb-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='psdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rrsba-ctrl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='serialize'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ss'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='SierraForest-v3'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-ifma'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-ne-convert'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-vnni-int8'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='bhi-ctrl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='cldemote'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='cmpccxadd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fbsdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrs'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='intel-psfd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ipred-ctrl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='lam'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='mcdt-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdir64b'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdiri'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pbrsb-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='psdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rrsba-ctrl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='serialize'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ss'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Skylake-Client'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Skylake-Client-IBRS'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Skylake-Client-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Skylake-Client-v2'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Skylake-Client-v3'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Skylake-Client-v4'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Skylake-Server'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Skylake-Server-IBRS'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Skylake-Server-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Skylake-Server-v2'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Skylake-Server-v3'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Skylake-Server-v4'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Skylake-Server-v5'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Snowridge'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='cldemote'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='core-capability'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdir64b'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdiri'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='mpx'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='split-lock-detect'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Snowridge-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='cldemote'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='core-capability'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdir64b'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdiri'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='mpx'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='split-lock-detect'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Snowridge-v2'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='cldemote'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='core-capability'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdir64b'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdiri'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='split-lock-detect'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Snowridge-v3'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='cldemote'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='core-capability'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdir64b'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdiri'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='split-lock-detect'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Snowridge-v4'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='cldemote'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdir64b'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdiri'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='athlon'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='3dnow'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='3dnowext'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='athlon-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='3dnow'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='3dnowext'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='core2duo'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ss'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='core2duo-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ss'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='coreduo'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ss'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='coreduo-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ss'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='n270'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ss'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='n270-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ss'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='phenom'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='3dnow'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='3dnowext'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='phenom-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='3dnow'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='3dnowext'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     </mode>
Jan 26 16:27:19 compute-0 nova_compute[184474]:   </cpu>
Jan 26 16:27:19 compute-0 nova_compute[184474]:   <memoryBacking supported='yes'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <enum name='sourceType'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <value>file</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <value>anonymous</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <value>memfd</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:   </memoryBacking>
Jan 26 16:27:19 compute-0 nova_compute[184474]:   <devices>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <disk supported='yes'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='diskDevice'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>disk</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>cdrom</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>floppy</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>lun</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='bus'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>ide</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>fdc</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>scsi</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>virtio</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>usb</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>sata</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='model'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>virtio</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>virtio-transitional</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>virtio-non-transitional</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     </disk>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <graphics supported='yes'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='type'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>vnc</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>egl-headless</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>dbus</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     </graphics>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <video supported='yes'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='modelType'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>vga</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>cirrus</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>virtio</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>none</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>bochs</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>ramfb</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     </video>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <hostdev supported='yes'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='mode'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>subsystem</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='startupPolicy'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>default</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>mandatory</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>requisite</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>optional</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='subsysType'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>usb</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>pci</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>scsi</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='capsType'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='pciBackend'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     </hostdev>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <rng supported='yes'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='model'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>virtio</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>virtio-transitional</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>virtio-non-transitional</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='backendModel'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>random</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>egd</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>builtin</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     </rng>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <filesystem supported='yes'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='driverType'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>path</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>handle</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>virtiofs</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     </filesystem>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <tpm supported='yes'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='model'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>tpm-tis</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>tpm-crb</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='backendModel'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>emulator</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>external</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='backendVersion'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>2.0</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     </tpm>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <redirdev supported='yes'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='bus'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>usb</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     </redirdev>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <channel supported='yes'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='type'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>pty</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>unix</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     </channel>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <crypto supported='yes'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='model'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='type'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>qemu</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='backendModel'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>builtin</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     </crypto>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <interface supported='yes'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='backendType'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>default</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>passt</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     </interface>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <panic supported='yes'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='model'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>isa</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>hyperv</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     </panic>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <console supported='yes'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='type'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>null</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>vc</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>pty</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>dev</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>file</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>pipe</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>stdio</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>udp</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>tcp</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>unix</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>qemu-vdagent</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>dbus</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     </console>
Jan 26 16:27:19 compute-0 nova_compute[184474]:   </devices>
Jan 26 16:27:19 compute-0 nova_compute[184474]:   <features>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <gic supported='no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <vmcoreinfo supported='yes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <genid supported='yes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <backingStoreInput supported='yes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <backup supported='yes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <async-teardown supported='yes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <s390-pv supported='no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <ps2 supported='yes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <tdx supported='no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <sev supported='no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <sgx supported='no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <hyperv supported='yes'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='features'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>relaxed</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>vapic</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>spinlocks</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>vpindex</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>runtime</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>synic</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>stimer</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>reset</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>vendor_id</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>frequencies</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>reenlightenment</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>tlbflush</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>ipi</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>avic</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>emsr_bitmap</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>xmm_input</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <defaults>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <spinlocks>4095</spinlocks>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <stimer_direct>on</stimer_direct>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <tlbflush_direct>on</tlbflush_direct>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <tlbflush_extended>on</tlbflush_extended>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </defaults>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     </hyperv>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <launchSecurity supported='no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:   </features>
Jan 26 16:27:19 compute-0 nova_compute[184474]: </domainCapabilities>
Jan 26 16:27:19 compute-0 nova_compute[184474]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 26 16:27:19 compute-0 nova_compute[184474]: 2026-01-26 16:27:18.966 184478 DEBUG nova.virt.libvirt.host [None req-78ff9f39-c0b8-4479-b839-627f81abe678 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Jan 26 16:27:19 compute-0 nova_compute[184474]: <domainCapabilities>
Jan 26 16:27:19 compute-0 nova_compute[184474]:   <path>/usr/libexec/qemu-kvm</path>
Jan 26 16:27:19 compute-0 nova_compute[184474]:   <domain>kvm</domain>
Jan 26 16:27:19 compute-0 nova_compute[184474]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 26 16:27:19 compute-0 nova_compute[184474]:   <arch>i686</arch>
Jan 26 16:27:19 compute-0 nova_compute[184474]:   <vcpu max='4096'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:   <iothreads supported='yes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:   <os supported='yes'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <enum name='firmware'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <loader supported='yes'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='type'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>rom</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>pflash</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='readonly'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>yes</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>no</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='secure'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>no</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     </loader>
Jan 26 16:27:19 compute-0 nova_compute[184474]:   </os>
Jan 26 16:27:19 compute-0 nova_compute[184474]:   <cpu>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <mode name='host-passthrough' supported='yes'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='hostPassthroughMigratable'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>on</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>off</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     </mode>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <mode name='maximum' supported='yes'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='maximumMigratable'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>on</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>off</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     </mode>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <mode name='host-model' supported='yes'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <vendor>AMD</vendor>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <feature policy='require' name='x2apic'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <feature policy='require' name='tsc-deadline'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <feature policy='require' name='hypervisor'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <feature policy='require' name='tsc_adjust'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <feature policy='require' name='spec-ctrl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <feature policy='require' name='stibp'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <feature policy='require' name='ssbd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <feature policy='require' name='cmp_legacy'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <feature policy='require' name='overflow-recov'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <feature policy='require' name='succor'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <feature policy='require' name='ibrs'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <feature policy='require' name='amd-ssbd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <feature policy='require' name='virt-ssbd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <feature policy='require' name='lbrv'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <feature policy='require' name='tsc-scale'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <feature policy='require' name='vmcb-clean'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <feature policy='require' name='flushbyasid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <feature policy='require' name='pause-filter'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <feature policy='require' name='pfthreshold'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <feature policy='require' name='svme-addr-chk'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <feature policy='disable' name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     </mode>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <mode name='custom' supported='yes'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Broadwell'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Broadwell-IBRS'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Broadwell-noTSX'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Broadwell-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Broadwell-v2'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Broadwell-v3'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Broadwell-v4'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Cascadelake-Server'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Cascadelake-Server-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Cascadelake-Server-v2'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Cascadelake-Server-v3'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Cascadelake-Server-v4'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Cascadelake-Server-v5'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='ClearwaterForest'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-ifma'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-ne-convert'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-vnni-int16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-vnni-int8'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='bhi-ctrl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='bhi-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='cldemote'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='cmpccxadd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ddpd-u'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fbsdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrs'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='intel-psfd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ipred-ctrl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='lam'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='mcdt-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdir64b'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdiri'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pbrsb-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='prefetchiti'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='psdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rrsba-ctrl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='serialize'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='sha512'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='sm3'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='sm4'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ss'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='ClearwaterForest-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-ifma'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-ne-convert'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-vnni-int16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-vnni-int8'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='bhi-ctrl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='bhi-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='cldemote'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='cmpccxadd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ddpd-u'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fbsdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrs'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='intel-psfd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ipred-ctrl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='lam'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='mcdt-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdir64b'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdiri'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pbrsb-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='prefetchiti'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='psdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rrsba-ctrl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='serialize'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='sha512'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='sm3'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='sm4'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ss'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Cooperlake'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-bf16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='taa-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Cooperlake-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-bf16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='taa-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Cooperlake-v2'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-bf16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='taa-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Denverton'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='mpx'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Denverton-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='mpx'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Denverton-v2'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Denverton-v3'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Dhyana-v2'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='EPYC-Genoa'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amd-psfd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='auto-ibrs'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-bf16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512ifma'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='no-nested-data-bp'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='null-sel-clr-base'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='stibp-always-on'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='EPYC-Genoa-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amd-psfd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='auto-ibrs'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-bf16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512ifma'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='no-nested-data-bp'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='null-sel-clr-base'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='stibp-always-on'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='EPYC-Genoa-v2'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amd-psfd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='auto-ibrs'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-bf16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512ifma'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fs-gs-base-ns'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='no-nested-data-bp'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='null-sel-clr-base'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='perfmon-v2'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='stibp-always-on'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='EPYC-Milan'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='EPYC-Milan-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='EPYC-Milan-v2'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amd-psfd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='no-nested-data-bp'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='null-sel-clr-base'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='stibp-always-on'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='EPYC-Milan-v3'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amd-psfd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='no-nested-data-bp'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='null-sel-clr-base'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='stibp-always-on'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='EPYC-Rome'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='EPYC-Rome-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='EPYC-Rome-v2'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='EPYC-Rome-v3'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='EPYC-Turin'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amd-psfd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='auto-ibrs'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-bf16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-vp2intersect'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512ifma'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fs-gs-base-ns'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibpb-brtype'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdir64b'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdiri'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='no-nested-data-bp'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='null-sel-clr-base'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='perfmon-v2'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='prefetchi'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='sbpb'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='srso-user-kernel-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='stibp-always-on'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='EPYC-Turin-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amd-psfd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='auto-ibrs'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-bf16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-vp2intersect'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512ifma'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fs-gs-base-ns'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibpb-brtype'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdir64b'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdiri'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='no-nested-data-bp'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='null-sel-clr-base'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='perfmon-v2'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='prefetchi'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='sbpb'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='srso-user-kernel-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='stibp-always-on'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='EPYC-v3'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='EPYC-v4'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='EPYC-v5'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='GraniteRapids'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-bf16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-fp16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-int8'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-tile'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-bf16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-fp16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512ifma'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fbsdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrc'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrs'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fzrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='mcdt-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pbrsb-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='prefetchiti'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='psdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='serialize'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='taa-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='tsx-ldtrk'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xfd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='GraniteRapids-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-bf16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-fp16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-int8'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-tile'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-bf16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-fp16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512ifma'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fbsdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrc'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrs'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fzrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='mcdt-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pbrsb-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='prefetchiti'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='psdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='serialize'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='taa-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='tsx-ldtrk'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xfd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='GraniteRapids-v2'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-bf16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-fp16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-int8'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-tile'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx10'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx10-128'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx10-256'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx10-512'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-bf16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-fp16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512ifma'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='cldemote'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fbsdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrc'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrs'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fzrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='mcdt-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdir64b'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdiri'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pbrsb-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='prefetchiti'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='psdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='serialize'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ss'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='taa-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='tsx-ldtrk'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xfd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='GraniteRapids-v3'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-bf16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-fp16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-int8'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-tile'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx10'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx10-128'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx10-256'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx10-512'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-bf16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-fp16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512ifma'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='cldemote'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fbsdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrc'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrs'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fzrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='mcdt-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdir64b'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdiri'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pbrsb-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='prefetchiti'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='psdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='serialize'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ss'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='taa-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='tsx-ldtrk'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xfd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Haswell'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Haswell-IBRS'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Haswell-noTSX'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Haswell-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Haswell-v2'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Haswell-v3'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Haswell-v4'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Icelake-Server'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Icelake-Server-noTSX'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Icelake-Server-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Icelake-Server-v2'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Icelake-Server-v3'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='taa-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Icelake-Server-v4'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512ifma'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='taa-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Icelake-Server-v5'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512ifma'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='taa-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Icelake-Server-v6'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512ifma'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='taa-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Icelake-Server-v7'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512ifma'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='taa-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='IvyBridge'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='IvyBridge-IBRS'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='IvyBridge-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='IvyBridge-v2'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='KnightsMill'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-4fmaps'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-4vnniw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512er'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512pf'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ss'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='KnightsMill-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-4fmaps'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-4vnniw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512er'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512pf'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ss'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Opteron_G4'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fma4'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xop'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Opteron_G4-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fma4'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xop'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Opteron_G5'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fma4'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='tbm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xop'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Opteron_G5-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fma4'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='tbm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xop'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='SapphireRapids'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-bf16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-int8'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-tile'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-bf16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-fp16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512ifma'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrc'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrs'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fzrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='serialize'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='taa-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='tsx-ldtrk'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xfd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='SapphireRapids-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-bf16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-int8'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-tile'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-bf16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-fp16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512ifma'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrc'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrs'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fzrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='serialize'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='taa-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='tsx-ldtrk'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xfd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='SapphireRapids-v2'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-bf16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-int8'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-tile'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-bf16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-fp16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512ifma'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fbsdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrc'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrs'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fzrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='psdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='serialize'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='taa-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='tsx-ldtrk'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xfd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='SapphireRapids-v3'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-bf16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-int8'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-tile'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-bf16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-fp16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512ifma'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='cldemote'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fbsdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrc'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrs'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fzrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdir64b'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdiri'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='psdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='serialize'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ss'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='taa-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='tsx-ldtrk'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xfd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='SapphireRapids-v4'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-bf16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-int8'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-tile'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-bf16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-fp16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512ifma'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='cldemote'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fbsdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrc'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrs'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fzrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdir64b'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdiri'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='psdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='serialize'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ss'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='taa-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='tsx-ldtrk'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xfd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='SierraForest'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-ifma'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-ne-convert'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-vnni-int8'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='cmpccxadd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fbsdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrs'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='mcdt-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pbrsb-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='psdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='serialize'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='SierraForest-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-ifma'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-ne-convert'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-vnni-int8'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='cmpccxadd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fbsdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrs'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='mcdt-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pbrsb-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='psdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='serialize'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='SierraForest-v2'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-ifma'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-ne-convert'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-vnni-int8'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='bhi-ctrl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='cldemote'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='cmpccxadd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fbsdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrs'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='intel-psfd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ipred-ctrl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='lam'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='mcdt-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdir64b'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdiri'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pbrsb-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='psdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rrsba-ctrl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='serialize'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ss'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='SierraForest-v3'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-ifma'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-ne-convert'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-vnni-int8'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='bhi-ctrl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='cldemote'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='cmpccxadd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fbsdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrs'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='intel-psfd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ipred-ctrl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='lam'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='mcdt-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdir64b'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdiri'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pbrsb-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='psdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rrsba-ctrl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='serialize'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ss'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Skylake-Client'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Skylake-Client-IBRS'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Skylake-Client-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Skylake-Client-v2'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Skylake-Client-v3'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Skylake-Client-v4'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Skylake-Server'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Skylake-Server-IBRS'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Skylake-Server-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Skylake-Server-v2'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Skylake-Server-v3'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Skylake-Server-v4'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Skylake-Server-v5'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Snowridge'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='cldemote'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='core-capability'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdir64b'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdiri'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='mpx'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='split-lock-detect'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Snowridge-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='cldemote'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='core-capability'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdir64b'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdiri'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='mpx'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='split-lock-detect'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Snowridge-v2'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='cldemote'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='core-capability'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdir64b'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdiri'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='split-lock-detect'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Snowridge-v3'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='cldemote'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='core-capability'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdir64b'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdiri'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='split-lock-detect'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Snowridge-v4'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='cldemote'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdir64b'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdiri'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='athlon'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='3dnow'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='3dnowext'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='athlon-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='3dnow'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='3dnowext'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='core2duo'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ss'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='core2duo-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ss'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='coreduo'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ss'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='coreduo-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ss'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='n270'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ss'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='n270-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ss'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='phenom'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='3dnow'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='3dnowext'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='phenom-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='3dnow'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='3dnowext'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     </mode>
Jan 26 16:27:19 compute-0 nova_compute[184474]:   </cpu>
Jan 26 16:27:19 compute-0 nova_compute[184474]:   <memoryBacking supported='yes'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <enum name='sourceType'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <value>file</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <value>anonymous</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <value>memfd</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:   </memoryBacking>
Jan 26 16:27:19 compute-0 nova_compute[184474]:   <devices>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <disk supported='yes'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='diskDevice'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>disk</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>cdrom</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>floppy</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>lun</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='bus'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>fdc</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>scsi</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>virtio</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>usb</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>sata</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='model'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>virtio</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>virtio-transitional</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>virtio-non-transitional</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     </disk>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <graphics supported='yes'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='type'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>vnc</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>egl-headless</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>dbus</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     </graphics>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <video supported='yes'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='modelType'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>vga</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>cirrus</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>virtio</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>none</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>bochs</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>ramfb</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     </video>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <hostdev supported='yes'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='mode'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>subsystem</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='startupPolicy'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>default</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>mandatory</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>requisite</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>optional</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='subsysType'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>usb</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>pci</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>scsi</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='capsType'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='pciBackend'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     </hostdev>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <rng supported='yes'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='model'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>virtio</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>virtio-transitional</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>virtio-non-transitional</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='backendModel'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>random</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>egd</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>builtin</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     </rng>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <filesystem supported='yes'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='driverType'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>path</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>handle</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>virtiofs</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     </filesystem>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <tpm supported='yes'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='model'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>tpm-tis</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>tpm-crb</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='backendModel'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>emulator</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>external</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='backendVersion'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>2.0</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     </tpm>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <redirdev supported='yes'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='bus'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>usb</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     </redirdev>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <channel supported='yes'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='type'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>pty</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>unix</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     </channel>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <crypto supported='yes'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='model'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='type'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>qemu</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='backendModel'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>builtin</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     </crypto>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <interface supported='yes'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='backendType'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>default</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>passt</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     </interface>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <panic supported='yes'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='model'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>isa</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>hyperv</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     </panic>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <console supported='yes'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='type'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>null</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>vc</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>pty</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>dev</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>file</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>pipe</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>stdio</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>udp</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>tcp</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>unix</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>qemu-vdagent</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>dbus</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     </console>
Jan 26 16:27:19 compute-0 nova_compute[184474]:   </devices>
Jan 26 16:27:19 compute-0 nova_compute[184474]:   <features>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <gic supported='no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <vmcoreinfo supported='yes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <genid supported='yes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <backingStoreInput supported='yes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <backup supported='yes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <async-teardown supported='yes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <s390-pv supported='no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <ps2 supported='yes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <tdx supported='no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <sev supported='no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <sgx supported='no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <hyperv supported='yes'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='features'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>relaxed</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>vapic</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>spinlocks</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>vpindex</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>runtime</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>synic</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>stimer</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>reset</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>vendor_id</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>frequencies</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>reenlightenment</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>tlbflush</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>ipi</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>avic</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>emsr_bitmap</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>xmm_input</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <defaults>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <spinlocks>4095</spinlocks>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <stimer_direct>on</stimer_direct>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <tlbflush_direct>on</tlbflush_direct>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <tlbflush_extended>on</tlbflush_extended>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </defaults>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     </hyperv>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <launchSecurity supported='no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:   </features>
Jan 26 16:27:19 compute-0 nova_compute[184474]: </domainCapabilities>
Jan 26 16:27:19 compute-0 nova_compute[184474]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 26 16:27:19 compute-0 nova_compute[184474]: 2026-01-26 16:27:19.033 184478 DEBUG nova.virt.libvirt.host [None req-78ff9f39-c0b8-4479-b839-627f81abe678 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 26 16:27:19 compute-0 nova_compute[184474]: 2026-01-26 16:27:19.039 184478 DEBUG nova.virt.libvirt.host [None req-78ff9f39-c0b8-4479-b839-627f81abe678 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Jan 26 16:27:19 compute-0 nova_compute[184474]: <domainCapabilities>
Jan 26 16:27:19 compute-0 nova_compute[184474]:   <path>/usr/libexec/qemu-kvm</path>
Jan 26 16:27:19 compute-0 nova_compute[184474]:   <domain>kvm</domain>
Jan 26 16:27:19 compute-0 nova_compute[184474]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 26 16:27:19 compute-0 nova_compute[184474]:   <arch>x86_64</arch>
Jan 26 16:27:19 compute-0 nova_compute[184474]:   <vcpu max='240'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:   <iothreads supported='yes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:   <os supported='yes'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <enum name='firmware'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <loader supported='yes'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='type'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>rom</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>pflash</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='readonly'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>yes</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>no</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='secure'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>no</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     </loader>
Jan 26 16:27:19 compute-0 nova_compute[184474]:   </os>
Jan 26 16:27:19 compute-0 nova_compute[184474]:   <cpu>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <mode name='host-passthrough' supported='yes'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='hostPassthroughMigratable'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>on</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>off</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     </mode>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <mode name='maximum' supported='yes'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='maximumMigratable'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>on</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>off</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     </mode>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <mode name='host-model' supported='yes'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <vendor>AMD</vendor>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <feature policy='require' name='x2apic'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <feature policy='require' name='tsc-deadline'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <feature policy='require' name='hypervisor'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <feature policy='require' name='tsc_adjust'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <feature policy='require' name='spec-ctrl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <feature policy='require' name='stibp'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <feature policy='require' name='ssbd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <feature policy='require' name='cmp_legacy'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <feature policy='require' name='overflow-recov'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <feature policy='require' name='succor'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <feature policy='require' name='ibrs'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <feature policy='require' name='amd-ssbd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <feature policy='require' name='virt-ssbd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <feature policy='require' name='lbrv'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <feature policy='require' name='tsc-scale'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <feature policy='require' name='vmcb-clean'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <feature policy='require' name='flushbyasid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <feature policy='require' name='pause-filter'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <feature policy='require' name='pfthreshold'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <feature policy='require' name='svme-addr-chk'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <feature policy='disable' name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     </mode>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <mode name='custom' supported='yes'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Broadwell'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Broadwell-IBRS'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Broadwell-noTSX'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Broadwell-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Broadwell-v2'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Broadwell-v3'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Broadwell-v4'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Cascadelake-Server'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Cascadelake-Server-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Cascadelake-Server-v2'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Cascadelake-Server-v3'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Cascadelake-Server-v4'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Cascadelake-Server-v5'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='ClearwaterForest'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-ifma'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-ne-convert'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-vnni-int16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-vnni-int8'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='bhi-ctrl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='bhi-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='cldemote'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='cmpccxadd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ddpd-u'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fbsdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrs'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='intel-psfd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ipred-ctrl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='lam'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='mcdt-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdir64b'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdiri'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pbrsb-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='prefetchiti'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='psdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rrsba-ctrl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='serialize'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='sha512'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='sm3'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='sm4'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ss'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='ClearwaterForest-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-ifma'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-ne-convert'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-vnni-int16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-vnni-int8'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='bhi-ctrl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='bhi-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='cldemote'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='cmpccxadd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ddpd-u'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fbsdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrs'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='intel-psfd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ipred-ctrl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='lam'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='mcdt-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdir64b'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdiri'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pbrsb-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='prefetchiti'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='psdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rrsba-ctrl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='serialize'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='sha512'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='sm3'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='sm4'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ss'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Cooperlake'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-bf16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='taa-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Cooperlake-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-bf16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='taa-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Cooperlake-v2'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-bf16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='taa-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Denverton'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='mpx'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Denverton-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='mpx'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Denverton-v2'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Denverton-v3'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Dhyana-v2'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='EPYC-Genoa'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amd-psfd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='auto-ibrs'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-bf16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512ifma'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='no-nested-data-bp'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='null-sel-clr-base'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='stibp-always-on'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='EPYC-Genoa-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amd-psfd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='auto-ibrs'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-bf16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512ifma'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='no-nested-data-bp'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='null-sel-clr-base'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='stibp-always-on'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='EPYC-Genoa-v2'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amd-psfd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='auto-ibrs'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-bf16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512ifma'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fs-gs-base-ns'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='no-nested-data-bp'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='null-sel-clr-base'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='perfmon-v2'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='stibp-always-on'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='EPYC-Milan'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='EPYC-Milan-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='EPYC-Milan-v2'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amd-psfd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='no-nested-data-bp'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='null-sel-clr-base'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='stibp-always-on'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='EPYC-Milan-v3'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amd-psfd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='no-nested-data-bp'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='null-sel-clr-base'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='stibp-always-on'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='EPYC-Rome'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='EPYC-Rome-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='EPYC-Rome-v2'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='EPYC-Rome-v3'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='EPYC-Turin'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amd-psfd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='auto-ibrs'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-bf16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-vp2intersect'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512ifma'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fs-gs-base-ns'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibpb-brtype'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdir64b'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdiri'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='no-nested-data-bp'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='null-sel-clr-base'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='perfmon-v2'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='prefetchi'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='sbpb'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='srso-user-kernel-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='stibp-always-on'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='EPYC-Turin-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amd-psfd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='auto-ibrs'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-bf16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-vp2intersect'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512ifma'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fs-gs-base-ns'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibpb-brtype'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdir64b'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdiri'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='no-nested-data-bp'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='null-sel-clr-base'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='perfmon-v2'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='prefetchi'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='sbpb'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='srso-user-kernel-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='stibp-always-on'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='EPYC-v3'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='EPYC-v4'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='EPYC-v5'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='GraniteRapids'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-bf16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-fp16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-int8'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-tile'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-bf16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-fp16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512ifma'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fbsdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrc'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrs'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fzrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='mcdt-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pbrsb-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='prefetchiti'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='psdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='serialize'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='taa-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='tsx-ldtrk'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xfd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='GraniteRapids-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-bf16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-fp16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-int8'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-tile'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-bf16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-fp16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512ifma'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fbsdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrc'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrs'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fzrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='mcdt-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pbrsb-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='prefetchiti'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='psdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='serialize'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='taa-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='tsx-ldtrk'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xfd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='GraniteRapids-v2'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-bf16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-fp16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-int8'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-tile'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx10'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx10-128'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx10-256'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx10-512'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-bf16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-fp16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512ifma'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='cldemote'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fbsdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrc'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrs'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fzrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='mcdt-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdir64b'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdiri'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pbrsb-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='prefetchiti'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='psdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='serialize'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ss'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='taa-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='tsx-ldtrk'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xfd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='GraniteRapids-v3'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-bf16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-fp16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-int8'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-tile'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx10'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx10-128'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx10-256'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx10-512'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-bf16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-fp16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512ifma'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='cldemote'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fbsdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrc'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrs'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fzrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='mcdt-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdir64b'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdiri'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pbrsb-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='prefetchiti'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='psdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='serialize'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ss'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='taa-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='tsx-ldtrk'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xfd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Haswell'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Haswell-IBRS'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Haswell-noTSX'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Haswell-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Haswell-v2'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Haswell-v3'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Haswell-v4'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Icelake-Server'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Icelake-Server-noTSX'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Icelake-Server-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Icelake-Server-v2'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Icelake-Server-v3'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='taa-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Icelake-Server-v4'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512ifma'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='taa-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Icelake-Server-v5'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512ifma'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='taa-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Icelake-Server-v6'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512ifma'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='taa-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Icelake-Server-v7'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512ifma'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='taa-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='IvyBridge'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='IvyBridge-IBRS'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='IvyBridge-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='IvyBridge-v2'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='KnightsMill'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-4fmaps'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-4vnniw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512er'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512pf'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ss'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='KnightsMill-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-4fmaps'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-4vnniw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512er'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512pf'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ss'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Opteron_G4'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fma4'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xop'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Opteron_G4-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fma4'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xop'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Opteron_G5'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fma4'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='tbm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xop'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Opteron_G5-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fma4'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='tbm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xop'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='SapphireRapids'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-bf16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-int8'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-tile'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-bf16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-fp16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512ifma'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrc'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrs'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fzrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='serialize'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='taa-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='tsx-ldtrk'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xfd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='SapphireRapids-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-bf16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-int8'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-tile'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-bf16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-fp16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512ifma'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrc'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrs'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fzrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='serialize'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='taa-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='tsx-ldtrk'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xfd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='SapphireRapids-v2'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-bf16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-int8'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-tile'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-bf16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-fp16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512ifma'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fbsdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrc'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrs'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fzrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='psdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='serialize'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='taa-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='tsx-ldtrk'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xfd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='SapphireRapids-v3'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-bf16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-int8'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-tile'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-bf16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-fp16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512ifma'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='cldemote'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fbsdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrc'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrs'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fzrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdir64b'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdiri'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='psdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='serialize'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ss'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='taa-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='tsx-ldtrk'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xfd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='SapphireRapids-v4'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-bf16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-int8'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-tile'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-bf16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-fp16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512ifma'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='cldemote'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fbsdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrc'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrs'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fzrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdir64b'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdiri'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='psdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='serialize'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ss'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='taa-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='tsx-ldtrk'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xfd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='SierraForest'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-ifma'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-ne-convert'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-vnni-int8'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='cmpccxadd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fbsdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrs'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='mcdt-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pbrsb-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='psdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='serialize'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='SierraForest-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-ifma'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-ne-convert'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-vnni-int8'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='cmpccxadd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fbsdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrs'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='mcdt-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pbrsb-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='psdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='serialize'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='SierraForest-v2'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-ifma'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-ne-convert'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-vnni-int8'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='bhi-ctrl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='cldemote'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='cmpccxadd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fbsdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrs'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='intel-psfd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ipred-ctrl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='lam'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='mcdt-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdir64b'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdiri'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pbrsb-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='psdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rrsba-ctrl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='serialize'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ss'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='SierraForest-v3'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-ifma'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-ne-convert'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-vnni-int8'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='bhi-ctrl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='cldemote'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='cmpccxadd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fbsdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrs'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='intel-psfd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ipred-ctrl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='lam'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='mcdt-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdir64b'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdiri'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pbrsb-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='psdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rrsba-ctrl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='serialize'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ss'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Skylake-Client'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Skylake-Client-IBRS'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Skylake-Client-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Skylake-Client-v2'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Skylake-Client-v3'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Skylake-Client-v4'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Skylake-Server'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Skylake-Server-IBRS'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Skylake-Server-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Skylake-Server-v2'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Skylake-Server-v3'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Skylake-Server-v4'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Skylake-Server-v5'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Snowridge'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='cldemote'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='core-capability'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdir64b'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdiri'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='mpx'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='split-lock-detect'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Snowridge-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='cldemote'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='core-capability'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdir64b'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdiri'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='mpx'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='split-lock-detect'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Snowridge-v2'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='cldemote'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='core-capability'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdir64b'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdiri'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='split-lock-detect'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Snowridge-v3'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='cldemote'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='core-capability'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdir64b'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdiri'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='split-lock-detect'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Snowridge-v4'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='cldemote'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdir64b'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdiri'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='athlon'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='3dnow'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='3dnowext'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='athlon-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='3dnow'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='3dnowext'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='core2duo'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ss'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='core2duo-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ss'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='coreduo'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ss'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='coreduo-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ss'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='n270'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ss'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='n270-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ss'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='phenom'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='3dnow'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='3dnowext'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='phenom-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='3dnow'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='3dnowext'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     </mode>
Jan 26 16:27:19 compute-0 nova_compute[184474]:   </cpu>
Jan 26 16:27:19 compute-0 nova_compute[184474]:   <memoryBacking supported='yes'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <enum name='sourceType'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <value>file</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <value>anonymous</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <value>memfd</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:   </memoryBacking>
Jan 26 16:27:19 compute-0 nova_compute[184474]:   <devices>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <disk supported='yes'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='diskDevice'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>disk</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>cdrom</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>floppy</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>lun</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='bus'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>ide</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>fdc</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>scsi</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>virtio</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>usb</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>sata</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='model'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>virtio</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>virtio-transitional</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>virtio-non-transitional</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     </disk>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <graphics supported='yes'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='type'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>vnc</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>egl-headless</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>dbus</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     </graphics>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <video supported='yes'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='modelType'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>vga</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>cirrus</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>virtio</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>none</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>bochs</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>ramfb</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     </video>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <hostdev supported='yes'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='mode'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>subsystem</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='startupPolicy'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>default</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>mandatory</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>requisite</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>optional</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='subsysType'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>usb</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>pci</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>scsi</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='capsType'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='pciBackend'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     </hostdev>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <rng supported='yes'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='model'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>virtio</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>virtio-transitional</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>virtio-non-transitional</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='backendModel'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>random</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>egd</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>builtin</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     </rng>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <filesystem supported='yes'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='driverType'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>path</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>handle</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>virtiofs</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     </filesystem>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <tpm supported='yes'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='model'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>tpm-tis</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>tpm-crb</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='backendModel'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>emulator</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>external</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='backendVersion'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>2.0</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     </tpm>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <redirdev supported='yes'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='bus'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>usb</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     </redirdev>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <channel supported='yes'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='type'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>pty</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>unix</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     </channel>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <crypto supported='yes'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='model'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='type'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>qemu</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='backendModel'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>builtin</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     </crypto>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <interface supported='yes'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='backendType'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>default</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>passt</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     </interface>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <panic supported='yes'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='model'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>isa</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>hyperv</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     </panic>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <console supported='yes'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='type'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>null</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>vc</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>pty</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>dev</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>file</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>pipe</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>stdio</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>udp</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>tcp</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>unix</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>qemu-vdagent</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>dbus</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     </console>
Jan 26 16:27:19 compute-0 nova_compute[184474]:   </devices>
Jan 26 16:27:19 compute-0 nova_compute[184474]:   <features>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <gic supported='no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <vmcoreinfo supported='yes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <genid supported='yes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <backingStoreInput supported='yes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <backup supported='yes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <async-teardown supported='yes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <s390-pv supported='no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <ps2 supported='yes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <tdx supported='no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <sev supported='no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <sgx supported='no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <hyperv supported='yes'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='features'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>relaxed</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>vapic</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>spinlocks</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>vpindex</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>runtime</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>synic</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>stimer</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>reset</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>vendor_id</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>frequencies</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>reenlightenment</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>tlbflush</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>ipi</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>avic</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>emsr_bitmap</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>xmm_input</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <defaults>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <spinlocks>4095</spinlocks>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <stimer_direct>on</stimer_direct>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <tlbflush_direct>on</tlbflush_direct>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <tlbflush_extended>on</tlbflush_extended>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </defaults>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     </hyperv>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <launchSecurity supported='no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:   </features>
Jan 26 16:27:19 compute-0 nova_compute[184474]: </domainCapabilities>
Jan 26 16:27:19 compute-0 nova_compute[184474]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 26 16:27:19 compute-0 nova_compute[184474]: 2026-01-26 16:27:19.134 184478 DEBUG nova.virt.libvirt.host [None req-78ff9f39-c0b8-4479-b839-627f81abe678 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Jan 26 16:27:19 compute-0 nova_compute[184474]: <domainCapabilities>
Jan 26 16:27:19 compute-0 nova_compute[184474]:   <path>/usr/libexec/qemu-kvm</path>
Jan 26 16:27:19 compute-0 nova_compute[184474]:   <domain>kvm</domain>
Jan 26 16:27:19 compute-0 nova_compute[184474]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 26 16:27:19 compute-0 nova_compute[184474]:   <arch>x86_64</arch>
Jan 26 16:27:19 compute-0 nova_compute[184474]:   <vcpu max='4096'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:   <iothreads supported='yes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:   <os supported='yes'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <enum name='firmware'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <value>efi</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <loader supported='yes'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='type'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>rom</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>pflash</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='readonly'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>yes</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>no</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='secure'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>yes</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>no</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     </loader>
Jan 26 16:27:19 compute-0 nova_compute[184474]:   </os>
Jan 26 16:27:19 compute-0 nova_compute[184474]:   <cpu>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <mode name='host-passthrough' supported='yes'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='hostPassthroughMigratable'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>on</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>off</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     </mode>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <mode name='maximum' supported='yes'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='maximumMigratable'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>on</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>off</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     </mode>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <mode name='host-model' supported='yes'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <vendor>AMD</vendor>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <feature policy='require' name='x2apic'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <feature policy='require' name='tsc-deadline'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <feature policy='require' name='hypervisor'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <feature policy='require' name='tsc_adjust'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <feature policy='require' name='spec-ctrl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <feature policy='require' name='stibp'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <feature policy='require' name='ssbd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <feature policy='require' name='cmp_legacy'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <feature policy='require' name='overflow-recov'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <feature policy='require' name='succor'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <feature policy='require' name='ibrs'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <feature policy='require' name='amd-ssbd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <feature policy='require' name='virt-ssbd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <feature policy='require' name='lbrv'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <feature policy='require' name='tsc-scale'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <feature policy='require' name='vmcb-clean'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <feature policy='require' name='flushbyasid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <feature policy='require' name='pause-filter'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <feature policy='require' name='pfthreshold'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <feature policy='require' name='svme-addr-chk'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <feature policy='disable' name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     </mode>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <mode name='custom' supported='yes'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Broadwell'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Broadwell-IBRS'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Broadwell-noTSX'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Broadwell-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Broadwell-v2'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Broadwell-v3'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Broadwell-v4'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Cascadelake-Server'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Cascadelake-Server-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Cascadelake-Server-v2'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Cascadelake-Server-v3'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Cascadelake-Server-v4'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Cascadelake-Server-v5'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='ClearwaterForest'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-ifma'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-ne-convert'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-vnni-int16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-vnni-int8'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='bhi-ctrl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='bhi-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='cldemote'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='cmpccxadd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ddpd-u'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fbsdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrs'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='intel-psfd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ipred-ctrl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='lam'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='mcdt-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdir64b'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdiri'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pbrsb-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='prefetchiti'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='psdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rrsba-ctrl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='serialize'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='sha512'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='sm3'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='sm4'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ss'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='ClearwaterForest-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-ifma'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-ne-convert'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-vnni-int16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-vnni-int8'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='bhi-ctrl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='bhi-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='cldemote'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='cmpccxadd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ddpd-u'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fbsdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrs'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='intel-psfd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ipred-ctrl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='lam'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='mcdt-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdir64b'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdiri'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pbrsb-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='prefetchiti'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='psdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rrsba-ctrl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='serialize'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='sha512'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='sm3'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='sm4'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ss'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Cooperlake'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-bf16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='taa-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Cooperlake-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-bf16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='taa-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Cooperlake-v2'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-bf16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='taa-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Denverton'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='mpx'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Denverton-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='mpx'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Denverton-v2'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Denverton-v3'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Dhyana-v2'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='EPYC-Genoa'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amd-psfd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='auto-ibrs'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-bf16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512ifma'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='no-nested-data-bp'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='null-sel-clr-base'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='stibp-always-on'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='EPYC-Genoa-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amd-psfd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='auto-ibrs'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-bf16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512ifma'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='no-nested-data-bp'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='null-sel-clr-base'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='stibp-always-on'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='EPYC-Genoa-v2'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amd-psfd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='auto-ibrs'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-bf16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512ifma'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fs-gs-base-ns'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='no-nested-data-bp'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='null-sel-clr-base'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='perfmon-v2'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='stibp-always-on'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='EPYC-Milan'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='EPYC-Milan-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='EPYC-Milan-v2'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amd-psfd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='no-nested-data-bp'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='null-sel-clr-base'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='stibp-always-on'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='EPYC-Milan-v3'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amd-psfd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='no-nested-data-bp'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='null-sel-clr-base'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='stibp-always-on'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='EPYC-Rome'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='EPYC-Rome-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='EPYC-Rome-v2'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='EPYC-Rome-v3'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='EPYC-Turin'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amd-psfd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='auto-ibrs'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-bf16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-vp2intersect'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512ifma'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fs-gs-base-ns'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibpb-brtype'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdir64b'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdiri'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='no-nested-data-bp'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='null-sel-clr-base'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='perfmon-v2'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='prefetchi'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='sbpb'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='srso-user-kernel-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='stibp-always-on'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='EPYC-Turin-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amd-psfd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='auto-ibrs'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-bf16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-vp2intersect'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512ifma'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fs-gs-base-ns'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibpb-brtype'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdir64b'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdiri'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='no-nested-data-bp'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='null-sel-clr-base'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='perfmon-v2'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='prefetchi'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='sbpb'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='srso-user-kernel-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='stibp-always-on'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='EPYC-v3'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='EPYC-v4'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='EPYC-v5'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='GraniteRapids'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-bf16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-fp16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-int8'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-tile'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-bf16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-fp16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512ifma'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fbsdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrc'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrs'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fzrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='mcdt-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pbrsb-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='prefetchiti'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='psdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='serialize'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='taa-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='tsx-ldtrk'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xfd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='GraniteRapids-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-bf16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-fp16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-int8'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-tile'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-bf16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-fp16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512ifma'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fbsdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrc'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrs'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fzrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='mcdt-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pbrsb-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='prefetchiti'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='psdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='serialize'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='taa-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='tsx-ldtrk'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xfd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='GraniteRapids-v2'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-bf16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-fp16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-int8'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-tile'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx10'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx10-128'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx10-256'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx10-512'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-bf16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-fp16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512ifma'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='cldemote'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fbsdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrc'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrs'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fzrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='mcdt-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdir64b'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdiri'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pbrsb-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='prefetchiti'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='psdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='serialize'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ss'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='taa-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='tsx-ldtrk'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xfd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='GraniteRapids-v3'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-bf16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-fp16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-int8'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-tile'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx10'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx10-128'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx10-256'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx10-512'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-bf16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-fp16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512ifma'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='cldemote'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fbsdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrc'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrs'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fzrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='mcdt-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdir64b'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdiri'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pbrsb-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='prefetchiti'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='psdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='serialize'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ss'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='taa-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='tsx-ldtrk'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xfd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Haswell'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Haswell-IBRS'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Haswell-noTSX'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Haswell-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Haswell-v2'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Haswell-v3'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Haswell-v4'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Icelake-Server'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Icelake-Server-noTSX'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Icelake-Server-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Icelake-Server-v2'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Icelake-Server-v3'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='taa-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Icelake-Server-v4'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512ifma'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='taa-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Icelake-Server-v5'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512ifma'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='taa-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Icelake-Server-v6'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512ifma'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='taa-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Icelake-Server-v7'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512ifma'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='taa-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='IvyBridge'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='IvyBridge-IBRS'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='IvyBridge-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='IvyBridge-v2'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='KnightsMill'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-4fmaps'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-4vnniw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512er'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512pf'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ss'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='KnightsMill-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-4fmaps'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-4vnniw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512er'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512pf'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ss'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Opteron_G4'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fma4'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xop'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Opteron_G4-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fma4'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xop'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Opteron_G5'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fma4'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='tbm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xop'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Opteron_G5-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fma4'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='tbm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xop'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='SapphireRapids'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-bf16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-int8'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-tile'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-bf16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-fp16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512ifma'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrc'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrs'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fzrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='serialize'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='taa-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='tsx-ldtrk'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xfd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='SapphireRapids-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-bf16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-int8'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-tile'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-bf16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-fp16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512ifma'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrc'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrs'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fzrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='serialize'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='taa-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='tsx-ldtrk'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xfd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='SapphireRapids-v2'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-bf16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-int8'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-tile'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-bf16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-fp16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512ifma'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fbsdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrc'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrs'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fzrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='psdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='serialize'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='taa-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='tsx-ldtrk'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xfd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='SapphireRapids-v3'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-bf16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-int8'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-tile'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-bf16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-fp16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512ifma'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='cldemote'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fbsdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrc'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrs'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fzrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdir64b'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdiri'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='psdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='serialize'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ss'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='taa-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='tsx-ldtrk'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xfd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='SapphireRapids-v4'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-bf16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-int8'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='amx-tile'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-bf16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-fp16'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bitalg'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512ifma'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='cldemote'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fbsdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrc'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrs'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fzrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='la57'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdir64b'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdiri'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='psdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='serialize'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ss'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='taa-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='tsx-ldtrk'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xfd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='SierraForest'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-ifma'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-ne-convert'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-vnni-int8'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='cmpccxadd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fbsdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrs'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='mcdt-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pbrsb-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='psdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='serialize'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='SierraForest-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-ifma'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-ne-convert'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-vnni-int8'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='cmpccxadd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fbsdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrs'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='mcdt-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pbrsb-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='psdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='serialize'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='SierraForest-v2'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-ifma'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-ne-convert'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-vnni-int8'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='bhi-ctrl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='cldemote'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='cmpccxadd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fbsdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrs'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='intel-psfd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ipred-ctrl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='lam'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='mcdt-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdir64b'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdiri'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pbrsb-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='psdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rrsba-ctrl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='serialize'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ss'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='SierraForest-v3'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-ifma'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-ne-convert'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-vnni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx-vnni-int8'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='bhi-ctrl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='cldemote'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='cmpccxadd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fbsdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='fsrs'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ibrs-all'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='intel-psfd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ipred-ctrl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='lam'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='mcdt-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdir64b'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdiri'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pbrsb-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='psdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rrsba-ctrl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='serialize'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ss'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vaes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Skylake-Client'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Skylake-Client-IBRS'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Skylake-Client-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Skylake-Client-v2'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Skylake-Client-v3'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Skylake-Client-v4'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Skylake-Server'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Skylake-Server-IBRS'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Skylake-Server-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Skylake-Server-v2'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='hle'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='rtm'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Skylake-Server-v3'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Skylake-Server-v4'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Skylake-Server-v5'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512bw'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512cd'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512dq'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512f'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='avx512vl'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='invpcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pcid'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='pku'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Snowridge'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='cldemote'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='core-capability'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdir64b'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdiri'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='mpx'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='split-lock-detect'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Snowridge-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='cldemote'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='core-capability'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdir64b'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdiri'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='mpx'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='split-lock-detect'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Snowridge-v2'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='cldemote'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='core-capability'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdir64b'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdiri'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='split-lock-detect'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Snowridge-v3'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='cldemote'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='core-capability'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdir64b'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdiri'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='split-lock-detect'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='Snowridge-v4'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='cldemote'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='erms'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='gfni'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdir64b'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='movdiri'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='xsaves'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='athlon'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='3dnow'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='3dnowext'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='athlon-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='3dnow'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='3dnowext'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='core2duo'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ss'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='core2duo-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ss'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='coreduo'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ss'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='coreduo-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ss'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='n270'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ss'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='n270-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='ss'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='phenom'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='3dnow'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='3dnowext'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <blockers model='phenom-v1'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='3dnow'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <feature name='3dnowext'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </blockers>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     </mode>
Jan 26 16:27:19 compute-0 nova_compute[184474]:   </cpu>
Jan 26 16:27:19 compute-0 nova_compute[184474]:   <memoryBacking supported='yes'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <enum name='sourceType'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <value>file</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <value>anonymous</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <value>memfd</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:   </memoryBacking>
Jan 26 16:27:19 compute-0 nova_compute[184474]:   <devices>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <disk supported='yes'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='diskDevice'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>disk</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>cdrom</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>floppy</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>lun</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='bus'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>fdc</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>scsi</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>virtio</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>usb</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>sata</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='model'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>virtio</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>virtio-transitional</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>virtio-non-transitional</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     </disk>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <graphics supported='yes'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='type'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>vnc</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>egl-headless</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>dbus</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     </graphics>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <video supported='yes'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='modelType'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>vga</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>cirrus</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>virtio</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>none</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>bochs</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>ramfb</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     </video>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <hostdev supported='yes'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='mode'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>subsystem</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='startupPolicy'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>default</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>mandatory</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>requisite</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>optional</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='subsysType'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>usb</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>pci</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>scsi</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='capsType'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='pciBackend'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     </hostdev>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <rng supported='yes'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='model'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>virtio</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>virtio-transitional</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>virtio-non-transitional</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='backendModel'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>random</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>egd</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>builtin</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     </rng>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <filesystem supported='yes'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='driverType'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>path</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>handle</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>virtiofs</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     </filesystem>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <tpm supported='yes'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='model'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>tpm-tis</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>tpm-crb</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='backendModel'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>emulator</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>external</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='backendVersion'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>2.0</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     </tpm>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <redirdev supported='yes'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='bus'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>usb</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     </redirdev>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <channel supported='yes'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='type'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>pty</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>unix</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     </channel>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <crypto supported='yes'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='model'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='type'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>qemu</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='backendModel'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>builtin</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     </crypto>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <interface supported='yes'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='backendType'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>default</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>passt</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     </interface>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <panic supported='yes'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='model'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>isa</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>hyperv</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     </panic>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <console supported='yes'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='type'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>null</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>vc</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>pty</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>dev</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>file</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>pipe</value>
Jan 26 16:27:19 compute-0 python3.9[185326]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>stdio</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>udp</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>tcp</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>unix</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>qemu-vdagent</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>dbus</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     </console>
Jan 26 16:27:19 compute-0 nova_compute[184474]:   </devices>
Jan 26 16:27:19 compute-0 nova_compute[184474]:   <features>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <gic supported='no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <vmcoreinfo supported='yes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <genid supported='yes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <backingStoreInput supported='yes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <backup supported='yes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <async-teardown supported='yes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <s390-pv supported='no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <ps2 supported='yes'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <tdx supported='no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <sev supported='no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <sgx supported='no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <hyperv supported='yes'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <enum name='features'>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>relaxed</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>vapic</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>spinlocks</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>vpindex</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>runtime</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>synic</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>stimer</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>reset</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>vendor_id</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>frequencies</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>reenlightenment</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>tlbflush</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>ipi</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>avic</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>emsr_bitmap</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <value>xmm_input</value>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </enum>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       <defaults>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <spinlocks>4095</spinlocks>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <stimer_direct>on</stimer_direct>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <tlbflush_direct>on</tlbflush_direct>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <tlbflush_extended>on</tlbflush_extended>
Jan 26 16:27:19 compute-0 nova_compute[184474]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 26 16:27:19 compute-0 nova_compute[184474]:       </defaults>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     </hyperv>
Jan 26 16:27:19 compute-0 nova_compute[184474]:     <launchSecurity supported='no'/>
Jan 26 16:27:19 compute-0 nova_compute[184474]:   </features>
Jan 26 16:27:19 compute-0 nova_compute[184474]: </domainCapabilities>
Jan 26 16:27:19 compute-0 nova_compute[184474]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 26 16:27:19 compute-0 nova_compute[184474]: 2026-01-26 16:27:19.206 184478 DEBUG nova.virt.libvirt.host [None req-78ff9f39-c0b8-4479-b839-627f81abe678 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 26 16:27:19 compute-0 nova_compute[184474]: 2026-01-26 16:27:19.212 184478 INFO nova.virt.libvirt.host [None req-78ff9f39-c0b8-4479-b839-627f81abe678 - - - - - -] Secure Boot support detected
Jan 26 16:27:19 compute-0 nova_compute[184474]: 2026-01-26 16:27:19.215 184478 INFO nova.virt.libvirt.driver [None req-78ff9f39-c0b8-4479-b839-627f81abe678 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Jan 26 16:27:19 compute-0 nova_compute[184474]: 2026-01-26 16:27:19.229 184478 DEBUG nova.virt.libvirt.driver [None req-78ff9f39-c0b8-4479-b839-627f81abe678 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Jan 26 16:27:19 compute-0 nova_compute[184474]: 2026-01-26 16:27:19.290 184478 INFO nova.virt.node [None req-78ff9f39-c0b8-4479-b839-627f81abe678 - - - - - -] Determined node identity b0bb5d31-f35b-4a04-b67d-66acc24fb822 from /var/lib/nova/compute_id
Jan 26 16:27:19 compute-0 nova_compute[184474]: 2026-01-26 16:27:19.320 184478 WARNING nova.compute.manager [None req-78ff9f39-c0b8-4479-b839-627f81abe678 - - - - - -] Compute nodes ['b0bb5d31-f35b-4a04-b67d-66acc24fb822'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Jan 26 16:27:19 compute-0 systemd[1]: Stopping nova_compute container...
Jan 26 16:27:19 compute-0 nova_compute[184474]: 2026-01-26 16:27:19.362 184478 INFO nova.compute.manager [None req-78ff9f39-c0b8-4479-b839-627f81abe678 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Jan 26 16:27:19 compute-0 nova_compute[184474]: 2026-01-26 16:27:19.411 184478 DEBUG oslo_concurrency.lockutils [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:27:19 compute-0 nova_compute[184474]: 2026-01-26 16:27:19.411 184478 DEBUG oslo_concurrency.lockutils [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:27:19 compute-0 nova_compute[184474]: 2026-01-26 16:27:19.412 184478 DEBUG oslo_concurrency.lockutils [None req-3e461bee-6e3e-4750-8de6-a94f25dee123 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:27:19 compute-0 nova_compute[184474]: 2026-01-26 16:27:19.413 184478 INFO oslo_messaging._drivers.amqpdriver [-] No calling threads waiting for msg_id : 9685ba1719924732bf98698c36c0fc14
Jan 26 16:27:19 compute-0 virtqemud[185114]: libvirt version: 11.10.0, package: 2.el9 (builder@centos.org, 2025-12-18-15:09:54, )
Jan 26 16:27:19 compute-0 virtqemud[185114]: hostname: compute-0
Jan 26 16:27:19 compute-0 virtqemud[185114]: End of file while reading data: Input/output error
Jan 26 16:27:19 compute-0 systemd[1]: libpod-6e5e7883c98a035cfcc89c1bbf1d83befcf123f062837eb3b9cd1caf4b1af30e.scope: Deactivated successfully.
Jan 26 16:27:19 compute-0 systemd[1]: libpod-6e5e7883c98a035cfcc89c1bbf1d83befcf123f062837eb3b9cd1caf4b1af30e.scope: Consumed 3.285s CPU time.
Jan 26 16:27:19 compute-0 podman[185333]: 2026-01-26 16:27:19.832256287 +0000 UTC m=+0.480418428 container died 6e5e7883c98a035cfcc89c1bbf1d83befcf123f062837eb3b9cd1caf4b1af30e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=nova_compute)
Jan 26 16:27:19 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6e5e7883c98a035cfcc89c1bbf1d83befcf123f062837eb3b9cd1caf4b1af30e-userdata-shm.mount: Deactivated successfully.
Jan 26 16:27:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-44ae2abc4d5cc1ae56cde84c8eb7e34cd36a46cf10e524313a32d66d92d7fedd-merged.mount: Deactivated successfully.
Jan 26 16:27:19 compute-0 podman[185333]: 2026-01-26 16:27:19.916174281 +0000 UTC m=+0.564336402 container cleanup 6e5e7883c98a035cfcc89c1bbf1d83befcf123f062837eb3b9cd1caf4b1af30e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202)
Jan 26 16:27:19 compute-0 podman[185333]: nova_compute
Jan 26 16:27:19 compute-0 podman[185361]: nova_compute
Jan 26 16:27:19 compute-0 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Jan 26 16:27:19 compute-0 systemd[1]: Stopped nova_compute container.
Jan 26 16:27:20 compute-0 systemd[1]: Starting nova_compute container...
Jan 26 16:27:22 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:27:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44ae2abc4d5cc1ae56cde84c8eb7e34cd36a46cf10e524313a32d66d92d7fedd/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 26 16:27:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44ae2abc4d5cc1ae56cde84c8eb7e34cd36a46cf10e524313a32d66d92d7fedd/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 26 16:27:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44ae2abc4d5cc1ae56cde84c8eb7e34cd36a46cf10e524313a32d66d92d7fedd/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 26 16:27:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44ae2abc4d5cc1ae56cde84c8eb7e34cd36a46cf10e524313a32d66d92d7fedd/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 26 16:27:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44ae2abc4d5cc1ae56cde84c8eb7e34cd36a46cf10e524313a32d66d92d7fedd/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 26 16:27:22 compute-0 podman[185374]: 2026-01-26 16:27:22.344922357 +0000 UTC m=+2.319406266 container init 6e5e7883c98a035cfcc89c1bbf1d83befcf123f062837eb3b9cd1caf4b1af30e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=edpm, io.buildah.version=1.41.3, tcib_managed=true, container_name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 26 16:27:22 compute-0 podman[185374]: 2026-01-26 16:27:22.351679942 +0000 UTC m=+2.326163821 container start 6e5e7883c98a035cfcc89c1bbf1d83befcf123f062837eb3b9cd1caf4b1af30e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, org.label-schema.license=GPLv2)
Jan 26 16:27:22 compute-0 podman[185374]: nova_compute
Jan 26 16:27:22 compute-0 nova_compute[185389]: + sudo -E kolla_set_configs
Jan 26 16:27:22 compute-0 systemd[1]: Started nova_compute container.
Jan 26 16:27:22 compute-0 sudo[185323]: pam_unix(sudo:session): session closed for user root
Jan 26 16:27:22 compute-0 nova_compute[185389]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 26 16:27:22 compute-0 nova_compute[185389]: INFO:__main__:Validating config file
Jan 26 16:27:22 compute-0 nova_compute[185389]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 26 16:27:22 compute-0 nova_compute[185389]: INFO:__main__:Copying service configuration files
Jan 26 16:27:22 compute-0 nova_compute[185389]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 26 16:27:22 compute-0 nova_compute[185389]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 26 16:27:22 compute-0 nova_compute[185389]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 26 16:27:22 compute-0 nova_compute[185389]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Jan 26 16:27:22 compute-0 nova_compute[185389]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 26 16:27:22 compute-0 nova_compute[185389]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 26 16:27:22 compute-0 nova_compute[185389]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 26 16:27:22 compute-0 nova_compute[185389]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 26 16:27:22 compute-0 nova_compute[185389]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 26 16:27:22 compute-0 nova_compute[185389]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Jan 26 16:27:22 compute-0 nova_compute[185389]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 26 16:27:22 compute-0 nova_compute[185389]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 26 16:27:22 compute-0 nova_compute[185389]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 26 16:27:22 compute-0 nova_compute[185389]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 26 16:27:22 compute-0 nova_compute[185389]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 26 16:27:22 compute-0 nova_compute[185389]: INFO:__main__:Deleting /etc/ceph
Jan 26 16:27:22 compute-0 nova_compute[185389]: INFO:__main__:Creating directory /etc/ceph
Jan 26 16:27:22 compute-0 nova_compute[185389]: INFO:__main__:Setting permission for /etc/ceph
Jan 26 16:27:22 compute-0 nova_compute[185389]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Jan 26 16:27:22 compute-0 nova_compute[185389]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 26 16:27:22 compute-0 nova_compute[185389]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 26 16:27:22 compute-0 nova_compute[185389]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Jan 26 16:27:22 compute-0 nova_compute[185389]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 26 16:27:22 compute-0 nova_compute[185389]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 26 16:27:22 compute-0 nova_compute[185389]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 26 16:27:22 compute-0 nova_compute[185389]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 26 16:27:22 compute-0 nova_compute[185389]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 26 16:27:22 compute-0 nova_compute[185389]: INFO:__main__:Writing out command to execute
Jan 26 16:27:22 compute-0 nova_compute[185389]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 26 16:27:22 compute-0 nova_compute[185389]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 26 16:27:22 compute-0 nova_compute[185389]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 26 16:27:22 compute-0 nova_compute[185389]: ++ cat /run_command
Jan 26 16:27:22 compute-0 nova_compute[185389]: + CMD=nova-compute
Jan 26 16:27:22 compute-0 nova_compute[185389]: + ARGS=
Jan 26 16:27:22 compute-0 nova_compute[185389]: + sudo kolla_copy_cacerts
Jan 26 16:27:22 compute-0 nova_compute[185389]: + [[ ! -n '' ]]
Jan 26 16:27:22 compute-0 nova_compute[185389]: + . kolla_extend_start
Jan 26 16:27:22 compute-0 nova_compute[185389]: Running command: 'nova-compute'
Jan 26 16:27:22 compute-0 nova_compute[185389]: + echo 'Running command: '\''nova-compute'\'''
Jan 26 16:27:22 compute-0 nova_compute[185389]: + umask 0022
Jan 26 16:27:22 compute-0 nova_compute[185389]: + exec nova-compute
Jan 26 16:27:22 compute-0 sudo[185550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvizxbqljnmywzwgvasczphsgneewvbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444842.6178172-1287-37921325891375/AnsiballZ_podman_container.py'
Jan 26 16:27:22 compute-0 sudo[185550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:27:23 compute-0 python3.9[185552]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 26 16:27:23 compute-0 systemd[1]: Started libpod-conmon-37cfd9d6a93056a37021c0da53dcf9d4de32dc54e0f9e526cca02f8cf41f4a60.scope.
Jan 26 16:27:23 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:27:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d82b770f5fbdbaed6fd4112385f8283985f4165a812af67e3c07507354875090/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Jan 26 16:27:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d82b770f5fbdbaed6fd4112385f8283985f4165a812af67e3c07507354875090/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Jan 26 16:27:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d82b770f5fbdbaed6fd4112385f8283985f4165a812af67e3c07507354875090/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 26 16:27:23 compute-0 podman[185577]: 2026-01-26 16:27:23.421202969 +0000 UTC m=+0.132746185 container init 37cfd9d6a93056a37021c0da53dcf9d4de32dc54e0f9e526cca02f8cf41f4a60 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=nova_compute_init, org.label-schema.build-date=20251202)
Jan 26 16:27:23 compute-0 podman[185577]: 2026-01-26 16:27:23.433443815 +0000 UTC m=+0.144987011 container start 37cfd9d6a93056a37021c0da53dcf9d4de32dc54e0f9e526cca02f8cf41f4a60 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 26 16:27:23 compute-0 python3.9[185552]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Jan 26 16:27:23 compute-0 nova_compute_init[185600]: INFO:nova_statedir:Applying nova statedir ownership
Jan 26 16:27:23 compute-0 nova_compute_init[185600]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Jan 26 16:27:23 compute-0 nova_compute_init[185600]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Jan 26 16:27:23 compute-0 nova_compute_init[185600]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Jan 26 16:27:23 compute-0 nova_compute_init[185600]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Jan 26 16:27:23 compute-0 nova_compute_init[185600]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Jan 26 16:27:23 compute-0 nova_compute_init[185600]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Jan 26 16:27:23 compute-0 nova_compute_init[185600]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Jan 26 16:27:23 compute-0 nova_compute_init[185600]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Jan 26 16:27:23 compute-0 nova_compute_init[185600]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Jan 26 16:27:23 compute-0 nova_compute_init[185600]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Jan 26 16:27:23 compute-0 nova_compute_init[185600]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Jan 26 16:27:23 compute-0 nova_compute_init[185600]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Jan 26 16:27:23 compute-0 nova_compute_init[185600]: INFO:nova_statedir:Nova statedir ownership complete
Jan 26 16:27:23 compute-0 systemd[1]: libpod-37cfd9d6a93056a37021c0da53dcf9d4de32dc54e0f9e526cca02f8cf41f4a60.scope: Deactivated successfully.
Jan 26 16:27:23 compute-0 podman[185614]: 2026-01-26 16:27:23.553821159 +0000 UTC m=+0.031178317 container died 37cfd9d6a93056a37021c0da53dcf9d4de32dc54e0f9e526cca02f8cf41f4a60 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, container_name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 26 16:27:23 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-37cfd9d6a93056a37021c0da53dcf9d4de32dc54e0f9e526cca02f8cf41f4a60-userdata-shm.mount: Deactivated successfully.
Jan 26 16:27:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-d82b770f5fbdbaed6fd4112385f8283985f4165a812af67e3c07507354875090-merged.mount: Deactivated successfully.
Jan 26 16:27:23 compute-0 sudo[185550]: pam_unix(sudo:session): session closed for user root
Jan 26 16:27:23 compute-0 podman[185614]: 2026-01-26 16:27:23.601456957 +0000 UTC m=+0.078814105 container cleanup 37cfd9d6a93056a37021c0da53dcf9d4de32dc54e0f9e526cca02f8cf41f4a60 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, org.label-schema.build-date=20251202, config_id=edpm)
Jan 26 16:27:23 compute-0 systemd[1]: libpod-conmon-37cfd9d6a93056a37021c0da53dcf9d4de32dc54e0f9e526cca02f8cf41f4a60.scope: Deactivated successfully.
Jan 26 16:27:24 compute-0 sshd-session[162323]: Connection closed by 192.168.122.30 port 47220
Jan 26 16:27:24 compute-0 sshd-session[162320]: pam_unix(sshd:session): session closed for user zuul
Jan 26 16:27:24 compute-0 systemd[1]: session-24.scope: Deactivated successfully.
Jan 26 16:27:24 compute-0 systemd[1]: session-24.scope: Consumed 1min 45.074s CPU time.
Jan 26 16:27:24 compute-0 systemd-logind[788]: Session 24 logged out. Waiting for processes to exit.
Jan 26 16:27:24 compute-0 systemd-logind[788]: Removed session 24.
Jan 26 16:27:24 compute-0 nova_compute[185389]: 2026-01-26 16:27:24.530 185393 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 26 16:27:24 compute-0 nova_compute[185389]: 2026-01-26 16:27:24.530 185393 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 26 16:27:24 compute-0 nova_compute[185389]: 2026-01-26 16:27:24.530 185393 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 26 16:27:24 compute-0 nova_compute[185389]: 2026-01-26 16:27:24.531 185393 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Jan 26 16:27:24 compute-0 nova_compute[185389]: 2026-01-26 16:27:24.678 185393 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:27:24 compute-0 nova_compute[185389]: 2026-01-26 16:27:24.709 185393 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.031s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:27:24 compute-0 nova_compute[185389]: 2026-01-26 16:27:24.709 185393 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.777 185393 INFO nova.virt.driver [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.899 185393 INFO nova.compute.provider_config [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.979 185393 DEBUG oslo_concurrency.lockutils [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.979 185393 DEBUG oslo_concurrency.lockutils [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.980 185393 DEBUG oslo_concurrency.lockutils [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.980 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.981 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.981 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.981 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.981 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.981 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.982 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.982 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.982 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.982 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.982 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.983 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.983 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.983 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.983 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.984 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.984 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.984 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.984 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.984 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.985 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.985 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.985 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.985 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.986 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.986 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.986 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.987 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.987 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.987 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.987 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.988 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.988 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.988 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.988 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.988 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.989 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.989 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.989 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.989 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.990 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.990 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.990 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.990 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.990 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.991 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.991 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.991 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.991 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.992 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.992 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.992 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.992 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.992 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.993 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.993 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.993 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.993 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.993 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.994 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.994 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.994 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.994 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.994 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.995 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.995 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.995 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.995 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.996 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.996 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.996 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.996 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.996 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.997 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.997 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.997 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.997 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.997 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.998 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.998 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.998 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.998 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.998 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.999 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.999 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:25 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.999 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:25.999 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.000 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.000 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.000 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.000 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.001 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.001 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.001 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.001 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.002 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.002 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.002 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.002 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.003 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.003 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.003 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.003 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.003 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.004 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.004 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.004 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.004 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.004 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.005 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.005 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.005 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.005 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.006 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.006 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.006 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.006 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.006 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.007 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.007 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.007 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.007 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.007 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.008 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.008 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.008 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.008 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.008 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.009 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.009 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.009 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.009 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.009 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.010 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.010 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.010 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.010 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.010 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.011 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.011 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.011 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.011 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.011 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.012 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.012 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.012 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.012 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.012 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.013 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.013 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.013 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.013 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.014 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.014 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.014 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.014 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.014 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.015 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.015 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.015 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.015 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.016 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.016 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.016 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.016 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.016 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.017 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.017 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.017 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.017 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.018 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.018 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.018 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.018 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.018 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.019 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.019 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.019 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.019 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.020 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.020 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.020 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.020 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.020 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.021 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.021 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.021 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.022 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.022 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.022 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.022 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.023 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.023 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.023 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.023 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.023 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.024 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.024 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.024 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.025 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.025 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.025 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.025 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.025 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.026 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.026 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.026 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.026 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.027 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.027 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.027 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.027 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.027 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.028 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.028 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.028 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.028 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.028 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.029 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.029 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.029 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.029 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.030 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.030 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.030 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.030 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.031 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.031 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.031 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.031 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.031 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.032 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.032 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.032 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.032 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.033 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.033 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.033 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.033 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.034 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.034 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.034 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.034 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.034 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.035 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.035 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.035 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.035 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.036 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.036 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.036 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.036 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.037 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.037 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.037 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.037 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.037 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.038 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.038 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.038 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.038 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.038 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.039 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.039 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.039 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.039 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.039 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.040 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.040 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.040 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.040 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.041 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.041 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.041 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.041 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.041 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.041 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.042 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.042 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.042 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.042 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.042 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.042 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.042 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.043 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.043 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.043 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.043 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.043 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.044 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.044 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.044 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.044 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.044 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.044 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.045 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.045 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.045 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.045 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.045 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.046 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.046 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.046 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.046 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.046 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.047 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.047 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.047 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.047 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.047 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.047 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.047 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.048 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.048 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.048 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.048 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.048 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.048 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.049 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.049 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.049 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.049 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.049 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.049 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.049 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.050 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.050 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.050 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.050 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.050 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.050 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.050 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.051 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.051 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.051 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.051 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.051 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.051 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.052 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.052 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.052 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.052 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.052 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.052 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.053 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.053 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.053 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.053 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.053 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.053 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.054 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.054 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.054 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.054 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.054 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.054 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.055 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.055 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.055 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.055 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.055 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.055 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.055 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.056 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.056 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.056 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.056 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.056 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.056 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.057 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.057 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.057 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.057 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.057 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.058 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.058 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.058 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.058 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.058 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.059 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.059 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.059 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.059 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.059 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.059 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.060 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.060 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.060 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.060 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.060 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.061 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.061 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.061 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.061 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.061 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.061 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.062 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.062 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.062 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.062 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.062 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.062 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.062 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.062 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.063 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.063 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.063 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.063 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.063 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.063 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.063 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.064 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.064 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.064 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.064 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.064 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.064 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.064 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.065 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.065 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.065 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.065 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.065 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.065 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.065 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.066 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.066 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.066 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.066 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.066 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.066 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.066 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.067 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.067 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.067 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.067 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.067 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.067 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.067 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.067 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.068 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.068 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.068 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.068 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.068 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.068 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.069 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.069 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.069 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.069 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.069 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.069 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.069 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.070 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.070 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.070 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.070 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.070 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.070 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.071 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.071 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.071 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.071 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.071 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.071 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.071 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.071 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.072 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.072 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.images_rbd_ceph_conf   =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.072 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.072 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.072 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.images_rbd_glance_store_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.072 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.images_rbd_pool        = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.073 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.images_type            = qcow2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.073 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.073 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.073 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.073 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.073 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.073 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.074 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.074 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.074 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.074 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.074 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.074 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.074 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.075 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.075 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.075 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.075 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.075 185393 WARNING oslo_config.cfg [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 26 16:27:26 compute-0 nova_compute[185389]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 26 16:27:26 compute-0 nova_compute[185389]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Jan 26 16:27:26 compute-0 nova_compute[185389]: and ``live_migration_inbound_addr`` respectively.
Jan 26 16:27:26 compute-0 nova_compute[185389]: ).  Its value may be silently ignored in the future.
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.075 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.076 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.076 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.076 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.076 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.076 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.076 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.076 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.077 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.077 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.077 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.077 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.077 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.077 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.077 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.078 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.078 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.078 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.078 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.rbd_secret_uuid        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.078 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.rbd_user               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.078 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.078 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.079 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.079 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.079 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.079 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.079 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.079 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.079 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.080 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.080 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.080 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.080 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.080 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.080 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.080 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.081 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.081 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.081 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.081 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.081 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.081 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.082 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.082 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.082 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.082 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.082 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.082 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.082 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.083 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.083 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.083 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.083 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.083 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.083 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.083 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.083 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.084 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.084 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.084 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.084 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.084 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.084 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.085 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.085 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.085 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.085 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.085 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.085 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.085 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.085 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.086 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.086 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.086 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.086 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.086 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.086 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.087 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.087 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.087 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.087 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.087 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.087 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.087 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.088 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.088 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.088 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.088 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.088 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.088 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.088 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.089 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.089 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.089 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.089 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.089 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.089 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.090 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.090 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.090 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.090 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.090 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.090 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.091 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.091 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.091 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.091 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.091 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.092 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.092 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.092 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.092 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.092 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.092 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.093 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.093 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.093 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.093 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.093 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.093 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.093 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.094 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.094 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.094 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.094 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.094 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.094 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.095 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.095 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.095 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.095 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.095 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.095 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.096 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.096 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.096 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.096 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.096 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.096 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.097 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.097 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.097 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.097 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.097 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.097 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.098 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.098 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.098 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.098 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.098 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.098 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.099 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.099 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.099 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.099 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.099 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.099 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.100 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.100 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.100 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.100 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.100 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.100 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.100 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.100 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.101 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.101 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.101 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.101 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.101 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.101 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.101 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.102 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.102 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.102 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.102 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.102 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.102 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.103 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.103 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.103 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.103 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.103 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.103 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.104 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.104 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.104 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.104 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.104 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.104 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.105 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.105 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.105 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.105 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.105 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.105 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.105 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.106 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.106 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.106 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.106 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.106 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.106 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.107 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.107 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.107 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.107 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.107 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.107 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.108 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.108 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.108 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.108 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.108 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.108 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.109 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.109 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.109 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.109 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.109 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.109 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.109 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.109 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.110 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.110 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.110 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.110 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.110 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.110 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.110 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.111 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.111 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.111 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.111 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.111 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.111 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.111 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.112 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.112 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.112 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.112 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.112 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.112 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.112 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.112 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.113 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.113 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.113 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.113 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.113 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.114 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.114 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.114 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.114 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.114 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.114 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.114 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.115 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.115 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.115 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.115 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.115 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.115 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.115 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.115 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.116 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.116 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.116 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.116 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.116 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.116 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.116 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.117 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.117 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.117 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.117 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.117 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.117 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.118 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.118 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.118 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.118 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.118 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.118 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.118 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.118 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.119 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.119 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.119 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.119 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.119 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.119 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.119 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.120 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.120 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.120 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.120 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.120 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.120 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.120 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.121 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.121 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.121 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.121 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.121 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.121 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.121 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.122 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.122 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.122 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.122 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.122 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.122 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.122 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.122 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.123 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.123 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.123 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.123 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.123 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.123 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.123 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.124 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.124 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.124 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.124 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.124 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.124 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.124 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.125 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.125 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.125 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.125 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.125 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.125 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.125 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.126 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.126 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.126 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.126 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.126 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.126 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.126 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.127 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.127 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.127 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.127 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.127 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.127 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.127 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.128 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.128 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.128 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.128 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.128 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.128 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.128 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.128 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.129 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.129 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.129 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.129 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.129 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.129 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.129 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.130 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.130 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.130 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.130 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.130 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.130 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.131 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.131 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.131 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.131 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.131 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.131 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.131 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.132 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.132 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.132 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.132 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.132 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.132 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.132 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.133 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.133 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.133 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.133 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.133 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.133 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.133 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.134 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.134 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.134 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.134 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.134 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.134 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.134 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.135 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.135 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.135 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.135 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.135 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.135 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.135 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.136 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.136 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.136 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.136 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.136 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.136 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.137 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.137 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.137 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.137 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.137 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.137 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.138 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.138 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.138 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.138 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.138 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.138 185393 DEBUG oslo_service.service [None req-377cc5d4-4e56-467d-bdc4-57e13227293c - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.139 185393 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.176 185393 INFO nova.virt.node [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Determined node identity b0bb5d31-f35b-4a04-b67d-66acc24fb822 from /var/lib/nova/compute_id
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.177 185393 DEBUG nova.virt.libvirt.host [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.177 185393 DEBUG nova.virt.libvirt.host [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.177 185393 DEBUG nova.virt.libvirt.host [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.178 185393 DEBUG nova.virt.libvirt.host [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.192 185393 DEBUG nova.virt.libvirt.host [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fbaa94db850> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.194 185393 DEBUG nova.virt.libvirt.host [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fbaa94db850> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.195 185393 INFO nova.virt.libvirt.driver [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Connection event '1' reason 'None'
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.202 185393 INFO nova.virt.libvirt.host [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Libvirt host capabilities <capabilities>
Jan 26 16:27:26 compute-0 nova_compute[185389]: 
Jan 26 16:27:26 compute-0 nova_compute[185389]:   <host>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <uuid>07141d90-ae2c-4848-91d9-402155316ee1</uuid>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <cpu>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <arch>x86_64</arch>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model>EPYC-Rome-v4</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <vendor>AMD</vendor>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <microcode version='16777317'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <signature family='23' model='49' stepping='0'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <maxphysaddr mode='emulate' bits='40'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature name='x2apic'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature name='tsc-deadline'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature name='osxsave'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature name='hypervisor'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature name='tsc_adjust'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature name='spec-ctrl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature name='stibp'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature name='arch-capabilities'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature name='ssbd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature name='cmp_legacy'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature name='topoext'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature name='virt-ssbd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature name='lbrv'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature name='tsc-scale'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature name='vmcb-clean'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature name='pause-filter'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature name='pfthreshold'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature name='svme-addr-chk'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature name='rdctl-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature name='skip-l1dfl-vmentry'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature name='mds-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature name='pschange-mc-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <pages unit='KiB' size='4'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <pages unit='KiB' size='2048'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <pages unit='KiB' size='1048576'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </cpu>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <power_management>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <suspend_mem/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <suspend_disk/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <suspend_hybrid/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </power_management>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <iommu support='no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <migration_features>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <live/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <uri_transports>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <uri_transport>tcp</uri_transport>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <uri_transport>rdma</uri_transport>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </uri_transports>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </migration_features>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <topology>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <cells num='1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <cell id='0'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:           <memory unit='KiB'>7864316</memory>
Jan 26 16:27:26 compute-0 nova_compute[185389]:           <pages unit='KiB' size='4'>1966079</pages>
Jan 26 16:27:26 compute-0 nova_compute[185389]:           <pages unit='KiB' size='2048'>0</pages>
Jan 26 16:27:26 compute-0 nova_compute[185389]:           <pages unit='KiB' size='1048576'>0</pages>
Jan 26 16:27:26 compute-0 nova_compute[185389]:           <distances>
Jan 26 16:27:26 compute-0 nova_compute[185389]:             <sibling id='0' value='10'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:           </distances>
Jan 26 16:27:26 compute-0 nova_compute[185389]:           <cpus num='8'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:           </cpus>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         </cell>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </cells>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </topology>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <cache>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </cache>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <secmodel>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model>selinux</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <doi>0</doi>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </secmodel>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <secmodel>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model>dac</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <doi>0</doi>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <baselabel type='kvm'>+107:+107</baselabel>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <baselabel type='qemu'>+107:+107</baselabel>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </secmodel>
Jan 26 16:27:26 compute-0 nova_compute[185389]:   </host>
Jan 26 16:27:26 compute-0 nova_compute[185389]: 
Jan 26 16:27:26 compute-0 nova_compute[185389]:   <guest>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <os_type>hvm</os_type>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <arch name='i686'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <wordsize>32</wordsize>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <domain type='qemu'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <domain type='kvm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </arch>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <features>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <pae/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <nonpae/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <acpi default='on' toggle='yes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <apic default='on' toggle='no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <cpuselection/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <deviceboot/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <disksnapshot default='on' toggle='no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <externalSnapshot/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </features>
Jan 26 16:27:26 compute-0 nova_compute[185389]:   </guest>
Jan 26 16:27:26 compute-0 nova_compute[185389]: 
Jan 26 16:27:26 compute-0 nova_compute[185389]:   <guest>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <os_type>hvm</os_type>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <arch name='x86_64'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <wordsize>64</wordsize>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <domain type='qemu'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <domain type='kvm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </arch>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <features>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <acpi default='on' toggle='yes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <apic default='on' toggle='no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <cpuselection/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <deviceboot/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <disksnapshot default='on' toggle='no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <externalSnapshot/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </features>
Jan 26 16:27:26 compute-0 nova_compute[185389]:   </guest>
Jan 26 16:27:26 compute-0 nova_compute[185389]: 
Jan 26 16:27:26 compute-0 nova_compute[185389]: </capabilities>
Jan 26 16:27:26 compute-0 nova_compute[185389]: 
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.212 185393 DEBUG nova.virt.libvirt.host [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.220 185393 DEBUG nova.virt.libvirt.host [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Jan 26 16:27:26 compute-0 nova_compute[185389]: <domainCapabilities>
Jan 26 16:27:26 compute-0 nova_compute[185389]:   <path>/usr/libexec/qemu-kvm</path>
Jan 26 16:27:26 compute-0 nova_compute[185389]:   <domain>kvm</domain>
Jan 26 16:27:26 compute-0 nova_compute[185389]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 26 16:27:26 compute-0 nova_compute[185389]:   <arch>i686</arch>
Jan 26 16:27:26 compute-0 nova_compute[185389]:   <vcpu max='240'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:   <iothreads supported='yes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:   <os supported='yes'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <enum name='firmware'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <loader supported='yes'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='type'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>rom</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>pflash</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='readonly'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>yes</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>no</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='secure'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>no</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </loader>
Jan 26 16:27:26 compute-0 nova_compute[185389]:   </os>
Jan 26 16:27:26 compute-0 nova_compute[185389]:   <cpu>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <mode name='host-passthrough' supported='yes'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='hostPassthroughMigratable'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>on</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>off</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </mode>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <mode name='maximum' supported='yes'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='maximumMigratable'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>on</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>off</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </mode>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <mode name='host-model' supported='yes'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <vendor>AMD</vendor>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='require' name='x2apic'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='require' name='tsc-deadline'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='require' name='hypervisor'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='require' name='tsc_adjust'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='require' name='spec-ctrl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='require' name='stibp'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='require' name='ssbd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='require' name='cmp_legacy'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='require' name='overflow-recov'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='require' name='succor'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='require' name='ibrs'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='require' name='amd-ssbd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='require' name='virt-ssbd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='require' name='lbrv'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='require' name='tsc-scale'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='require' name='vmcb-clean'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='require' name='flushbyasid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='require' name='pause-filter'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='require' name='pfthreshold'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='require' name='svme-addr-chk'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='disable' name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </mode>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <mode name='custom' supported='yes'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Broadwell'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Broadwell-IBRS'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Broadwell-noTSX'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Broadwell-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Broadwell-v2'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Broadwell-v3'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Broadwell-v4'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Cascadelake-Server'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Cascadelake-Server-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Cascadelake-Server-v2'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Cascadelake-Server-v3'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Cascadelake-Server-v4'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Cascadelake-Server-v5'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='ClearwaterForest'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-ne-convert'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni-int16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni-int8'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='bhi-ctrl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='bhi-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='cldemote'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='cmpccxadd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ddpd-u'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fbsdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrs'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='intel-psfd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ipred-ctrl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='lam'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='mcdt-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdir64b'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdiri'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pbrsb-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='prefetchiti'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='psdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rrsba-ctrl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='serialize'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='sha512'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='sm3'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='sm4'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ss'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='ClearwaterForest-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-ne-convert'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni-int16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni-int8'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='bhi-ctrl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='bhi-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='cldemote'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='cmpccxadd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ddpd-u'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fbsdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrs'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='intel-psfd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ipred-ctrl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='lam'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='mcdt-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdir64b'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdiri'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pbrsb-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='prefetchiti'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='psdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rrsba-ctrl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='serialize'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='sha512'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='sm3'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='sm4'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ss'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Cooperlake'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='taa-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Cooperlake-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='taa-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Cooperlake-v2'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='taa-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Denverton'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='mpx'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Denverton-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='mpx'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Denverton-v2'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Denverton-v3'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Dhyana-v2'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='EPYC-Genoa'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amd-psfd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='auto-ibrs'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='no-nested-data-bp'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='null-sel-clr-base'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='stibp-always-on'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='EPYC-Genoa-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amd-psfd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='auto-ibrs'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='no-nested-data-bp'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='null-sel-clr-base'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='stibp-always-on'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='EPYC-Genoa-v2'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amd-psfd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='auto-ibrs'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fs-gs-base-ns'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='no-nested-data-bp'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='null-sel-clr-base'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='perfmon-v2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='stibp-always-on'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='EPYC-Milan'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='EPYC-Milan-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='EPYC-Milan-v2'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amd-psfd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='no-nested-data-bp'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='null-sel-clr-base'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='stibp-always-on'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='EPYC-Milan-v3'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amd-psfd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='no-nested-data-bp'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='null-sel-clr-base'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='stibp-always-on'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='EPYC-Rome'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='EPYC-Rome-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='EPYC-Rome-v2'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='EPYC-Rome-v3'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='EPYC-Turin'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amd-psfd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='auto-ibrs'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vp2intersect'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fs-gs-base-ns'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibpb-brtype'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdir64b'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdiri'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='no-nested-data-bp'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='null-sel-clr-base'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='perfmon-v2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='prefetchi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='sbpb'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='srso-user-kernel-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='stibp-always-on'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='EPYC-Turin-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amd-psfd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='auto-ibrs'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vp2intersect'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fs-gs-base-ns'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibpb-brtype'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdir64b'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdiri'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='no-nested-data-bp'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='null-sel-clr-base'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='perfmon-v2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='prefetchi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='sbpb'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='srso-user-kernel-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='stibp-always-on'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='EPYC-v3'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='EPYC-v4'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='EPYC-v5'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='GraniteRapids'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-fp16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-int8'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-tile'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-fp16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fbsdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrc'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrs'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fzrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='mcdt-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pbrsb-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='prefetchiti'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='psdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='serialize'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='taa-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='tsx-ldtrk'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xfd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='GraniteRapids-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-fp16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-int8'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-tile'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-fp16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fbsdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrc'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrs'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fzrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='mcdt-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pbrsb-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='prefetchiti'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='psdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='serialize'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='taa-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='tsx-ldtrk'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xfd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='GraniteRapids-v2'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-fp16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-int8'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-tile'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx10'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx10-128'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx10-256'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx10-512'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-fp16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='cldemote'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fbsdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrc'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrs'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fzrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='mcdt-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdir64b'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdiri'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pbrsb-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='prefetchiti'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='psdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='serialize'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ss'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='taa-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='tsx-ldtrk'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xfd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='GraniteRapids-v3'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-fp16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-int8'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-tile'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx10'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx10-128'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx10-256'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx10-512'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-fp16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='cldemote'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fbsdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrc'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrs'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fzrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='mcdt-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdir64b'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdiri'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pbrsb-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='prefetchiti'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='psdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='serialize'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ss'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='taa-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='tsx-ldtrk'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xfd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Haswell'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Haswell-IBRS'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Haswell-noTSX'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Haswell-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Haswell-v2'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Haswell-v3'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Haswell-v4'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Icelake-Server'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Icelake-Server-noTSX'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Icelake-Server-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Icelake-Server-v2'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Icelake-Server-v3'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='taa-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Icelake-Server-v4'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='taa-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Icelake-Server-v5'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='taa-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Icelake-Server-v6'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='taa-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Icelake-Server-v7'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='taa-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='IvyBridge'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='IvyBridge-IBRS'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='IvyBridge-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='IvyBridge-v2'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='KnightsMill'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-4fmaps'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-4vnniw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512er'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512pf'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ss'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='KnightsMill-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-4fmaps'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-4vnniw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512er'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512pf'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ss'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Opteron_G4'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fma4'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xop'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Opteron_G4-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fma4'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xop'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Opteron_G5'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fma4'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='tbm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xop'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Opteron_G5-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fma4'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='tbm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xop'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='SapphireRapids'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-int8'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-tile'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-fp16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrc'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrs'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fzrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='serialize'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='taa-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='tsx-ldtrk'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xfd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='SapphireRapids-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-int8'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-tile'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-fp16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrc'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrs'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fzrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='serialize'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='taa-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='tsx-ldtrk'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xfd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='SapphireRapids-v2'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-int8'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-tile'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-fp16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fbsdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrc'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrs'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fzrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='psdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='serialize'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='taa-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='tsx-ldtrk'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xfd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='SapphireRapids-v3'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-int8'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-tile'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-fp16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='cldemote'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fbsdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrc'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrs'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fzrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdir64b'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdiri'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='psdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='serialize'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ss'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='taa-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='tsx-ldtrk'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xfd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='SapphireRapids-v4'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-int8'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-tile'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-fp16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='cldemote'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fbsdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrc'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrs'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fzrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdir64b'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdiri'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='psdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='serialize'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ss'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='taa-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='tsx-ldtrk'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xfd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='SierraForest'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-ne-convert'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni-int8'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='cmpccxadd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fbsdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrs'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='mcdt-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pbrsb-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='psdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='serialize'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='SierraForest-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-ne-convert'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni-int8'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='cmpccxadd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fbsdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrs'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='mcdt-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pbrsb-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='psdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='serialize'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='SierraForest-v2'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-ne-convert'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni-int8'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='bhi-ctrl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='cldemote'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='cmpccxadd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fbsdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrs'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='intel-psfd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ipred-ctrl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='lam'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='mcdt-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdir64b'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdiri'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pbrsb-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='psdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rrsba-ctrl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='serialize'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ss'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='SierraForest-v3'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-ne-convert'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni-int8'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='bhi-ctrl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='cldemote'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='cmpccxadd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fbsdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrs'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='intel-psfd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ipred-ctrl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='lam'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='mcdt-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdir64b'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdiri'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pbrsb-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='psdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rrsba-ctrl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='serialize'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ss'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Skylake-Client'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Skylake-Client-IBRS'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Skylake-Client-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Skylake-Client-v2'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Skylake-Client-v3'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Skylake-Client-v4'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Skylake-Server'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Skylake-Server-IBRS'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Skylake-Server-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Skylake-Server-v2'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Skylake-Server-v3'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Skylake-Server-v4'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Skylake-Server-v5'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Snowridge'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='cldemote'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='core-capability'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdir64b'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdiri'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='mpx'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='split-lock-detect'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Snowridge-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='cldemote'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='core-capability'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdir64b'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdiri'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='mpx'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='split-lock-detect'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Snowridge-v2'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='cldemote'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='core-capability'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdir64b'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdiri'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='split-lock-detect'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Snowridge-v3'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='cldemote'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='core-capability'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdir64b'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdiri'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='split-lock-detect'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Snowridge-v4'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='cldemote'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdir64b'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdiri'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='athlon'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='3dnow'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='3dnowext'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='athlon-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='3dnow'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='3dnowext'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='core2duo'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ss'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='core2duo-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ss'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='coreduo'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ss'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='coreduo-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ss'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='n270'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ss'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='n270-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ss'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='phenom'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='3dnow'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='3dnowext'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='phenom-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='3dnow'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='3dnowext'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </mode>
Jan 26 16:27:26 compute-0 nova_compute[185389]:   </cpu>
Jan 26 16:27:26 compute-0 nova_compute[185389]:   <memoryBacking supported='yes'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <enum name='sourceType'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <value>file</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <value>anonymous</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <value>memfd</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:   </memoryBacking>
Jan 26 16:27:26 compute-0 nova_compute[185389]:   <devices>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <disk supported='yes'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='diskDevice'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>disk</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>cdrom</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>floppy</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>lun</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='bus'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>ide</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>fdc</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>scsi</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>virtio</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>usb</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>sata</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='model'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>virtio</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>virtio-transitional</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>virtio-non-transitional</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </disk>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <graphics supported='yes'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='type'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>vnc</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>egl-headless</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>dbus</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </graphics>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <video supported='yes'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='modelType'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>vga</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>cirrus</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>virtio</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>none</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>bochs</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>ramfb</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </video>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <hostdev supported='yes'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='mode'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>subsystem</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='startupPolicy'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>default</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>mandatory</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>requisite</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>optional</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='subsysType'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>usb</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>pci</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>scsi</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='capsType'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='pciBackend'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </hostdev>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <rng supported='yes'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='model'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>virtio</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>virtio-transitional</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>virtio-non-transitional</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='backendModel'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>random</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>egd</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>builtin</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </rng>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <filesystem supported='yes'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='driverType'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>path</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>handle</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>virtiofs</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </filesystem>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <tpm supported='yes'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='model'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>tpm-tis</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>tpm-crb</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='backendModel'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>emulator</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>external</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='backendVersion'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>2.0</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </tpm>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <redirdev supported='yes'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='bus'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>usb</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </redirdev>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <channel supported='yes'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='type'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>pty</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>unix</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </channel>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <crypto supported='yes'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='model'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='type'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>qemu</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='backendModel'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>builtin</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </crypto>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <interface supported='yes'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='backendType'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>default</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>passt</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </interface>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <panic supported='yes'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='model'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>isa</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>hyperv</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </panic>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <console supported='yes'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='type'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>null</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>vc</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>pty</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>dev</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>file</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>pipe</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>stdio</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>udp</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>tcp</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>unix</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>qemu-vdagent</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>dbus</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </console>
Jan 26 16:27:26 compute-0 nova_compute[185389]:   </devices>
Jan 26 16:27:26 compute-0 nova_compute[185389]:   <features>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <gic supported='no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <vmcoreinfo supported='yes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <genid supported='yes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <backingStoreInput supported='yes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <backup supported='yes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <async-teardown supported='yes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <s390-pv supported='no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <ps2 supported='yes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <tdx supported='no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <sev supported='no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <sgx supported='no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <hyperv supported='yes'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='features'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>relaxed</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>vapic</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>spinlocks</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>vpindex</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>runtime</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>synic</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>stimer</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>reset</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>vendor_id</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>frequencies</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>reenlightenment</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>tlbflush</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>ipi</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>avic</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>emsr_bitmap</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>xmm_input</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <defaults>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <spinlocks>4095</spinlocks>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <stimer_direct>on</stimer_direct>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <tlbflush_direct>on</tlbflush_direct>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <tlbflush_extended>on</tlbflush_extended>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </defaults>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </hyperv>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <launchSecurity supported='no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:   </features>
Jan 26 16:27:26 compute-0 nova_compute[185389]: </domainCapabilities>
Jan 26 16:27:26 compute-0 nova_compute[185389]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.227 185393 DEBUG nova.virt.libvirt.host [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Jan 26 16:27:26 compute-0 nova_compute[185389]: <domainCapabilities>
Jan 26 16:27:26 compute-0 nova_compute[185389]:   <path>/usr/libexec/qemu-kvm</path>
Jan 26 16:27:26 compute-0 nova_compute[185389]:   <domain>kvm</domain>
Jan 26 16:27:26 compute-0 nova_compute[185389]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 26 16:27:26 compute-0 nova_compute[185389]:   <arch>i686</arch>
Jan 26 16:27:26 compute-0 nova_compute[185389]:   <vcpu max='4096'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:   <iothreads supported='yes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:   <os supported='yes'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <enum name='firmware'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <loader supported='yes'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='type'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>rom</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>pflash</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='readonly'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>yes</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>no</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='secure'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>no</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </loader>
Jan 26 16:27:26 compute-0 nova_compute[185389]:   </os>
Jan 26 16:27:26 compute-0 nova_compute[185389]:   <cpu>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <mode name='host-passthrough' supported='yes'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='hostPassthroughMigratable'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>on</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>off</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </mode>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <mode name='maximum' supported='yes'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='maximumMigratable'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>on</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>off</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </mode>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <mode name='host-model' supported='yes'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <vendor>AMD</vendor>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='require' name='x2apic'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='require' name='tsc-deadline'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='require' name='hypervisor'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='require' name='tsc_adjust'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='require' name='spec-ctrl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='require' name='stibp'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='require' name='ssbd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='require' name='cmp_legacy'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='require' name='overflow-recov'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='require' name='succor'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='require' name='ibrs'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='require' name='amd-ssbd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='require' name='virt-ssbd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='require' name='lbrv'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='require' name='tsc-scale'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='require' name='vmcb-clean'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='require' name='flushbyasid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='require' name='pause-filter'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='require' name='pfthreshold'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='require' name='svme-addr-chk'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='disable' name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </mode>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <mode name='custom' supported='yes'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Broadwell'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Broadwell-IBRS'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Broadwell-noTSX'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Broadwell-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Broadwell-v2'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Broadwell-v3'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Broadwell-v4'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Cascadelake-Server'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Cascadelake-Server-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Cascadelake-Server-v2'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Cascadelake-Server-v3'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Cascadelake-Server-v4'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Cascadelake-Server-v5'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='ClearwaterForest'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-ne-convert'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni-int16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni-int8'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='bhi-ctrl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='bhi-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='cldemote'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='cmpccxadd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ddpd-u'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fbsdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrs'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='intel-psfd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ipred-ctrl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='lam'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='mcdt-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdir64b'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdiri'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pbrsb-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='prefetchiti'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='psdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rrsba-ctrl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='serialize'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='sha512'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='sm3'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='sm4'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ss'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='ClearwaterForest-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-ne-convert'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni-int16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni-int8'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='bhi-ctrl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='bhi-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='cldemote'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='cmpccxadd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ddpd-u'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fbsdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrs'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='intel-psfd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ipred-ctrl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='lam'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='mcdt-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdir64b'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdiri'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pbrsb-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='prefetchiti'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='psdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rrsba-ctrl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='serialize'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='sha512'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='sm3'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='sm4'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ss'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Cooperlake'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='taa-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Cooperlake-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='taa-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Cooperlake-v2'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='taa-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Denverton'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='mpx'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Denverton-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='mpx'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Denverton-v2'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Denverton-v3'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Dhyana-v2'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='EPYC-Genoa'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amd-psfd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='auto-ibrs'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='no-nested-data-bp'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='null-sel-clr-base'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='stibp-always-on'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='EPYC-Genoa-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amd-psfd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='auto-ibrs'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='no-nested-data-bp'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='null-sel-clr-base'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='stibp-always-on'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='EPYC-Genoa-v2'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amd-psfd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='auto-ibrs'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fs-gs-base-ns'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='no-nested-data-bp'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='null-sel-clr-base'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='perfmon-v2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='stibp-always-on'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='EPYC-Milan'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='EPYC-Milan-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='EPYC-Milan-v2'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amd-psfd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='no-nested-data-bp'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='null-sel-clr-base'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='stibp-always-on'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='EPYC-Milan-v3'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amd-psfd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='no-nested-data-bp'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='null-sel-clr-base'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='stibp-always-on'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='EPYC-Rome'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='EPYC-Rome-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='EPYC-Rome-v2'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='EPYC-Rome-v3'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='EPYC-Turin'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amd-psfd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='auto-ibrs'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vp2intersect'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fs-gs-base-ns'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibpb-brtype'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdir64b'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdiri'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='no-nested-data-bp'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='null-sel-clr-base'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='perfmon-v2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='prefetchi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='sbpb'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='srso-user-kernel-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='stibp-always-on'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='EPYC-Turin-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amd-psfd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='auto-ibrs'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vp2intersect'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fs-gs-base-ns'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibpb-brtype'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdir64b'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdiri'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='no-nested-data-bp'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='null-sel-clr-base'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='perfmon-v2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='prefetchi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='sbpb'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='srso-user-kernel-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='stibp-always-on'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='EPYC-v3'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='EPYC-v4'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='EPYC-v5'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='GraniteRapids'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-fp16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-int8'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-tile'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-fp16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fbsdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrc'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrs'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fzrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='mcdt-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pbrsb-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='prefetchiti'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='psdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='serialize'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='taa-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='tsx-ldtrk'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xfd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='GraniteRapids-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-fp16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-int8'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-tile'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-fp16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fbsdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrc'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrs'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fzrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='mcdt-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pbrsb-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='prefetchiti'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='psdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='serialize'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='taa-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='tsx-ldtrk'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xfd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='GraniteRapids-v2'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-fp16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-int8'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-tile'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx10'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx10-128'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx10-256'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx10-512'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-fp16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='cldemote'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fbsdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrc'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrs'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fzrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='mcdt-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdir64b'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdiri'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pbrsb-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='prefetchiti'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='psdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='serialize'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ss'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='taa-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='tsx-ldtrk'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xfd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='GraniteRapids-v3'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-fp16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-int8'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-tile'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx10'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx10-128'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx10-256'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx10-512'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-fp16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='cldemote'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fbsdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrc'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrs'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fzrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='mcdt-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdir64b'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdiri'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pbrsb-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='prefetchiti'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='psdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='serialize'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ss'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='taa-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='tsx-ldtrk'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xfd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Haswell'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Haswell-IBRS'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Haswell-noTSX'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Haswell-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Haswell-v2'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Haswell-v3'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Haswell-v4'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Icelake-Server'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Icelake-Server-noTSX'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Icelake-Server-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Icelake-Server-v2'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Icelake-Server-v3'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='taa-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Icelake-Server-v4'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='taa-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Icelake-Server-v5'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='taa-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Icelake-Server-v6'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='taa-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Icelake-Server-v7'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='taa-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='IvyBridge'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='IvyBridge-IBRS'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='IvyBridge-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='IvyBridge-v2'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='KnightsMill'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-4fmaps'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-4vnniw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512er'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512pf'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ss'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='KnightsMill-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-4fmaps'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-4vnniw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512er'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512pf'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ss'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Opteron_G4'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fma4'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xop'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Opteron_G4-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fma4'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xop'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Opteron_G5'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fma4'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='tbm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xop'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Opteron_G5-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fma4'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='tbm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xop'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='SapphireRapids'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-int8'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-tile'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-fp16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrc'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrs'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fzrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='serialize'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='taa-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='tsx-ldtrk'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xfd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='SapphireRapids-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-int8'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-tile'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-fp16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrc'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrs'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fzrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='serialize'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='taa-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='tsx-ldtrk'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xfd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='SapphireRapids-v2'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-int8'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-tile'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-fp16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fbsdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrc'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrs'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fzrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='psdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='serialize'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='taa-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='tsx-ldtrk'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xfd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='SapphireRapids-v3'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-int8'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-tile'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-fp16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='cldemote'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fbsdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrc'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrs'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fzrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdir64b'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdiri'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='psdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='serialize'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ss'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='taa-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='tsx-ldtrk'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xfd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='SapphireRapids-v4'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-int8'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-tile'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-fp16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='cldemote'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fbsdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrc'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrs'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fzrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdir64b'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdiri'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='psdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='serialize'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ss'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='taa-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='tsx-ldtrk'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xfd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='SierraForest'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-ne-convert'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni-int8'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='cmpccxadd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fbsdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrs'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='mcdt-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pbrsb-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='psdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='serialize'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='SierraForest-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-ne-convert'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni-int8'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='cmpccxadd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fbsdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrs'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='mcdt-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pbrsb-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='psdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='serialize'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='SierraForest-v2'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-ne-convert'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni-int8'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='bhi-ctrl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='cldemote'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='cmpccxadd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fbsdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrs'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='intel-psfd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ipred-ctrl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='lam'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='mcdt-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdir64b'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdiri'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pbrsb-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='psdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rrsba-ctrl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='serialize'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ss'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='SierraForest-v3'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-ne-convert'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni-int8'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='bhi-ctrl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='cldemote'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='cmpccxadd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fbsdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrs'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='intel-psfd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ipred-ctrl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='lam'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='mcdt-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdir64b'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdiri'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pbrsb-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='psdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rrsba-ctrl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='serialize'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ss'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Skylake-Client'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Skylake-Client-IBRS'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Skylake-Client-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Skylake-Client-v2'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Skylake-Client-v3'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Skylake-Client-v4'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Skylake-Server'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Skylake-Server-IBRS'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Skylake-Server-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Skylake-Server-v2'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Skylake-Server-v3'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Skylake-Server-v4'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Skylake-Server-v5'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Snowridge'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='cldemote'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='core-capability'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdir64b'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdiri'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='mpx'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='split-lock-detect'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Snowridge-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='cldemote'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='core-capability'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdir64b'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdiri'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='mpx'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='split-lock-detect'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Snowridge-v2'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='cldemote'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='core-capability'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdir64b'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdiri'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='split-lock-detect'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Snowridge-v3'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='cldemote'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='core-capability'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdir64b'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdiri'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='split-lock-detect'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Snowridge-v4'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='cldemote'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdir64b'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdiri'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='athlon'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='3dnow'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='3dnowext'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='athlon-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='3dnow'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='3dnowext'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='core2duo'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ss'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='core2duo-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ss'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='coreduo'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ss'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='coreduo-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ss'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='n270'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ss'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='n270-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ss'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='phenom'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='3dnow'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='3dnowext'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='phenom-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='3dnow'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='3dnowext'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </mode>
Jan 26 16:27:26 compute-0 nova_compute[185389]:   </cpu>
Jan 26 16:27:26 compute-0 nova_compute[185389]:   <memoryBacking supported='yes'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <enum name='sourceType'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <value>file</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <value>anonymous</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <value>memfd</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:   </memoryBacking>
Jan 26 16:27:26 compute-0 nova_compute[185389]:   <devices>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <disk supported='yes'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='diskDevice'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>disk</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>cdrom</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>floppy</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>lun</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='bus'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>fdc</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>scsi</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>virtio</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>usb</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>sata</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='model'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>virtio</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>virtio-transitional</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>virtio-non-transitional</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </disk>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <graphics supported='yes'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='type'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>vnc</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>egl-headless</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>dbus</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </graphics>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <video supported='yes'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='modelType'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>vga</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>cirrus</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>virtio</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>none</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>bochs</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>ramfb</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </video>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <hostdev supported='yes'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='mode'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>subsystem</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='startupPolicy'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>default</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>mandatory</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>requisite</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>optional</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='subsysType'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>usb</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>pci</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>scsi</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='capsType'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='pciBackend'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </hostdev>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <rng supported='yes'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='model'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>virtio</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>virtio-transitional</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>virtio-non-transitional</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='backendModel'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>random</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>egd</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>builtin</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </rng>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <filesystem supported='yes'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='driverType'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>path</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>handle</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>virtiofs</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </filesystem>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <tpm supported='yes'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='model'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>tpm-tis</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>tpm-crb</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='backendModel'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>emulator</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>external</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='backendVersion'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>2.0</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </tpm>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <redirdev supported='yes'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='bus'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>usb</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </redirdev>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <channel supported='yes'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='type'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>pty</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>unix</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </channel>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <crypto supported='yes'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='model'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='type'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>qemu</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='backendModel'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>builtin</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </crypto>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <interface supported='yes'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='backendType'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>default</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>passt</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </interface>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <panic supported='yes'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='model'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>isa</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>hyperv</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </panic>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <console supported='yes'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='type'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>null</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>vc</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>pty</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>dev</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>file</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>pipe</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>stdio</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>udp</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>tcp</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>unix</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>qemu-vdagent</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>dbus</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </console>
Jan 26 16:27:26 compute-0 nova_compute[185389]:   </devices>
Jan 26 16:27:26 compute-0 nova_compute[185389]:   <features>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <gic supported='no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <vmcoreinfo supported='yes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <genid supported='yes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <backingStoreInput supported='yes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <backup supported='yes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <async-teardown supported='yes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <s390-pv supported='no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <ps2 supported='yes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <tdx supported='no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <sev supported='no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <sgx supported='no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <hyperv supported='yes'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='features'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>relaxed</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>vapic</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>spinlocks</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>vpindex</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>runtime</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>synic</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>stimer</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>reset</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>vendor_id</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>frequencies</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>reenlightenment</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>tlbflush</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>ipi</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>avic</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>emsr_bitmap</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>xmm_input</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <defaults>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <spinlocks>4095</spinlocks>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <stimer_direct>on</stimer_direct>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <tlbflush_direct>on</tlbflush_direct>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <tlbflush_extended>on</tlbflush_extended>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </defaults>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </hyperv>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <launchSecurity supported='no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:   </features>
Jan 26 16:27:26 compute-0 nova_compute[185389]: </domainCapabilities>
Jan 26 16:27:26 compute-0 nova_compute[185389]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.328 185393 DEBUG nova.virt.libvirt.host [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.332 185393 DEBUG nova.virt.libvirt.volume.mount [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.337 185393 DEBUG nova.virt.libvirt.host [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Jan 26 16:27:26 compute-0 nova_compute[185389]: <domainCapabilities>
Jan 26 16:27:26 compute-0 nova_compute[185389]:   <path>/usr/libexec/qemu-kvm</path>
Jan 26 16:27:26 compute-0 nova_compute[185389]:   <domain>kvm</domain>
Jan 26 16:27:26 compute-0 nova_compute[185389]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 26 16:27:26 compute-0 nova_compute[185389]:   <arch>x86_64</arch>
Jan 26 16:27:26 compute-0 nova_compute[185389]:   <vcpu max='240'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:   <iothreads supported='yes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:   <os supported='yes'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <enum name='firmware'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <loader supported='yes'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='type'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>rom</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>pflash</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='readonly'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>yes</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>no</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='secure'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>no</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </loader>
Jan 26 16:27:26 compute-0 nova_compute[185389]:   </os>
Jan 26 16:27:26 compute-0 nova_compute[185389]:   <cpu>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <mode name='host-passthrough' supported='yes'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='hostPassthroughMigratable'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>on</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>off</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </mode>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <mode name='maximum' supported='yes'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='maximumMigratable'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>on</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>off</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </mode>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <mode name='host-model' supported='yes'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <vendor>AMD</vendor>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='require' name='x2apic'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='require' name='tsc-deadline'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='require' name='hypervisor'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='require' name='tsc_adjust'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='require' name='spec-ctrl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='require' name='stibp'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='require' name='ssbd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='require' name='cmp_legacy'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='require' name='overflow-recov'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='require' name='succor'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='require' name='ibrs'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='require' name='amd-ssbd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='require' name='virt-ssbd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='require' name='lbrv'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='require' name='tsc-scale'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='require' name='vmcb-clean'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='require' name='flushbyasid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='require' name='pause-filter'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='require' name='pfthreshold'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='require' name='svme-addr-chk'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='disable' name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </mode>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <mode name='custom' supported='yes'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Broadwell'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Broadwell-IBRS'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Broadwell-noTSX'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Broadwell-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Broadwell-v2'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Broadwell-v3'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Broadwell-v4'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Cascadelake-Server'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Cascadelake-Server-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Cascadelake-Server-v2'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Cascadelake-Server-v3'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Cascadelake-Server-v4'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Cascadelake-Server-v5'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='ClearwaterForest'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-ne-convert'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni-int16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni-int8'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='bhi-ctrl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='bhi-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='cldemote'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='cmpccxadd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ddpd-u'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fbsdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrs'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='intel-psfd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ipred-ctrl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='lam'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='mcdt-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdir64b'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdiri'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pbrsb-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='prefetchiti'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='psdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rrsba-ctrl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='serialize'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='sha512'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='sm3'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='sm4'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ss'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='ClearwaterForest-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-ne-convert'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni-int16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni-int8'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='bhi-ctrl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='bhi-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='cldemote'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='cmpccxadd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ddpd-u'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fbsdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrs'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='intel-psfd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ipred-ctrl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='lam'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='mcdt-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdir64b'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdiri'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pbrsb-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='prefetchiti'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='psdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rrsba-ctrl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='serialize'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='sha512'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='sm3'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='sm4'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ss'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Cooperlake'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='taa-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Cooperlake-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='taa-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Cooperlake-v2'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='taa-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Denverton'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='mpx'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Denverton-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='mpx'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Denverton-v2'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Denverton-v3'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Dhyana-v2'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='EPYC-Genoa'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amd-psfd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='auto-ibrs'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='no-nested-data-bp'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='null-sel-clr-base'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='stibp-always-on'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='EPYC-Genoa-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amd-psfd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='auto-ibrs'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='no-nested-data-bp'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='null-sel-clr-base'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='stibp-always-on'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='EPYC-Genoa-v2'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amd-psfd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='auto-ibrs'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fs-gs-base-ns'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='no-nested-data-bp'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='null-sel-clr-base'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='perfmon-v2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='stibp-always-on'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='EPYC-Milan'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='EPYC-Milan-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='EPYC-Milan-v2'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amd-psfd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='no-nested-data-bp'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='null-sel-clr-base'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='stibp-always-on'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='EPYC-Milan-v3'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amd-psfd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='no-nested-data-bp'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='null-sel-clr-base'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='stibp-always-on'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='EPYC-Rome'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='EPYC-Rome-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='EPYC-Rome-v2'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='EPYC-Rome-v3'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='EPYC-Turin'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amd-psfd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='auto-ibrs'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vp2intersect'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fs-gs-base-ns'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibpb-brtype'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdir64b'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdiri'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='no-nested-data-bp'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='null-sel-clr-base'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='perfmon-v2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='prefetchi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='sbpb'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='srso-user-kernel-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='stibp-always-on'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='EPYC-Turin-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amd-psfd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='auto-ibrs'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vp2intersect'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fs-gs-base-ns'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibpb-brtype'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdir64b'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdiri'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='no-nested-data-bp'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='null-sel-clr-base'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='perfmon-v2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='prefetchi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='sbpb'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='srso-user-kernel-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='stibp-always-on'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='EPYC-v3'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='EPYC-v4'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='EPYC-v5'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='GraniteRapids'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-fp16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-int8'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-tile'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-fp16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fbsdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrc'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrs'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fzrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='mcdt-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pbrsb-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='prefetchiti'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='psdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='serialize'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='taa-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='tsx-ldtrk'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xfd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='GraniteRapids-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-fp16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-int8'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-tile'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-fp16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fbsdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrc'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrs'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fzrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='mcdt-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pbrsb-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='prefetchiti'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='psdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='serialize'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='taa-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='tsx-ldtrk'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xfd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='GraniteRapids-v2'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-fp16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-int8'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-tile'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx10'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx10-128'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx10-256'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx10-512'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-fp16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='cldemote'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fbsdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrc'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrs'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fzrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='mcdt-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdir64b'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdiri'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pbrsb-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='prefetchiti'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='psdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='serialize'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ss'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='taa-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='tsx-ldtrk'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xfd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='GraniteRapids-v3'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-fp16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-int8'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-tile'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx10'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx10-128'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx10-256'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx10-512'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-fp16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='cldemote'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fbsdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrc'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrs'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fzrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='mcdt-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdir64b'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdiri'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pbrsb-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='prefetchiti'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='psdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='serialize'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ss'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='taa-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='tsx-ldtrk'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xfd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Haswell'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Haswell-IBRS'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Haswell-noTSX'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Haswell-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Haswell-v2'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Haswell-v3'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Haswell-v4'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Icelake-Server'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Icelake-Server-noTSX'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Icelake-Server-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Icelake-Server-v2'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Icelake-Server-v3'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='taa-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Icelake-Server-v4'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='taa-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Icelake-Server-v5'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='taa-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Icelake-Server-v6'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='taa-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Icelake-Server-v7'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='taa-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='IvyBridge'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='IvyBridge-IBRS'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='IvyBridge-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='IvyBridge-v2'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='KnightsMill'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-4fmaps'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-4vnniw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512er'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512pf'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ss'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='KnightsMill-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-4fmaps'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-4vnniw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512er'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512pf'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ss'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Opteron_G4'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fma4'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xop'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Opteron_G4-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fma4'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xop'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Opteron_G5'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fma4'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='tbm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xop'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Opteron_G5-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fma4'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='tbm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xop'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='SapphireRapids'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-int8'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-tile'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-fp16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrc'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrs'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fzrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='serialize'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='taa-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='tsx-ldtrk'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xfd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='SapphireRapids-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-int8'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-tile'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-fp16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrc'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrs'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fzrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='serialize'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='taa-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='tsx-ldtrk'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xfd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='SapphireRapids-v2'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-int8'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-tile'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-fp16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fbsdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrc'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrs'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fzrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='psdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='serialize'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='taa-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='tsx-ldtrk'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xfd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='SapphireRapids-v3'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-int8'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-tile'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-fp16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='cldemote'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fbsdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrc'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrs'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fzrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdir64b'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdiri'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='psdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='serialize'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ss'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='taa-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='tsx-ldtrk'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xfd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='SapphireRapids-v4'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-int8'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-tile'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-fp16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='cldemote'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fbsdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrc'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrs'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fzrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdir64b'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdiri'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='psdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='serialize'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ss'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='taa-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='tsx-ldtrk'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xfd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='SierraForest'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-ne-convert'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni-int8'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='cmpccxadd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fbsdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrs'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='mcdt-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pbrsb-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='psdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='serialize'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='SierraForest-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-ne-convert'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni-int8'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='cmpccxadd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fbsdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrs'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='mcdt-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pbrsb-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='psdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='serialize'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='SierraForest-v2'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-ne-convert'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni-int8'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='bhi-ctrl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='cldemote'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='cmpccxadd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fbsdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrs'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='intel-psfd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ipred-ctrl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='lam'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='mcdt-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdir64b'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdiri'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pbrsb-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='psdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rrsba-ctrl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='serialize'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ss'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='SierraForest-v3'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-ne-convert'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni-int8'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='bhi-ctrl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='cldemote'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='cmpccxadd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fbsdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrs'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='intel-psfd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ipred-ctrl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='lam'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='mcdt-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdir64b'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdiri'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pbrsb-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='psdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rrsba-ctrl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='serialize'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ss'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Skylake-Client'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Skylake-Client-IBRS'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Skylake-Client-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Skylake-Client-v2'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Skylake-Client-v3'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Skylake-Client-v4'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Skylake-Server'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Skylake-Server-IBRS'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Skylake-Server-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Skylake-Server-v2'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Skylake-Server-v3'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Skylake-Server-v4'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Skylake-Server-v5'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Snowridge'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='cldemote'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='core-capability'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdir64b'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdiri'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='mpx'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='split-lock-detect'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Snowridge-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='cldemote'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='core-capability'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdir64b'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdiri'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='mpx'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='split-lock-detect'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Snowridge-v2'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='cldemote'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='core-capability'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdir64b'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdiri'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='split-lock-detect'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Snowridge-v3'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='cldemote'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='core-capability'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdir64b'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdiri'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='split-lock-detect'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Snowridge-v4'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='cldemote'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdir64b'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdiri'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='athlon'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='3dnow'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='3dnowext'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='athlon-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='3dnow'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='3dnowext'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='core2duo'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ss'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='core2duo-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ss'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='coreduo'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ss'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='coreduo-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ss'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='n270'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ss'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='n270-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ss'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='phenom'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='3dnow'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='3dnowext'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='phenom-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='3dnow'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='3dnowext'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </mode>
Jan 26 16:27:26 compute-0 nova_compute[185389]:   </cpu>
Jan 26 16:27:26 compute-0 nova_compute[185389]:   <memoryBacking supported='yes'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <enum name='sourceType'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <value>file</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <value>anonymous</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <value>memfd</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:   </memoryBacking>
Jan 26 16:27:26 compute-0 nova_compute[185389]:   <devices>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <disk supported='yes'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='diskDevice'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>disk</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>cdrom</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>floppy</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>lun</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='bus'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>ide</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>fdc</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>scsi</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>virtio</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>usb</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>sata</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='model'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>virtio</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>virtio-transitional</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>virtio-non-transitional</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </disk>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <graphics supported='yes'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='type'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>vnc</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>egl-headless</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>dbus</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </graphics>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <video supported='yes'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='modelType'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>vga</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>cirrus</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>virtio</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>none</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>bochs</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>ramfb</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </video>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <hostdev supported='yes'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='mode'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>subsystem</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='startupPolicy'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>default</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>mandatory</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>requisite</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>optional</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='subsysType'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>usb</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>pci</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>scsi</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='capsType'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='pciBackend'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </hostdev>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <rng supported='yes'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='model'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>virtio</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>virtio-transitional</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>virtio-non-transitional</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='backendModel'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>random</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>egd</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>builtin</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </rng>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <filesystem supported='yes'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='driverType'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>path</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>handle</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>virtiofs</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </filesystem>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <tpm supported='yes'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='model'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>tpm-tis</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>tpm-crb</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='backendModel'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>emulator</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>external</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='backendVersion'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>2.0</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </tpm>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <redirdev supported='yes'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='bus'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>usb</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </redirdev>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <channel supported='yes'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='type'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>pty</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>unix</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </channel>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <crypto supported='yes'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='model'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='type'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>qemu</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='backendModel'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>builtin</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </crypto>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <interface supported='yes'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='backendType'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>default</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>passt</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </interface>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <panic supported='yes'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='model'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>isa</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>hyperv</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </panic>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <console supported='yes'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='type'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>null</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>vc</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>pty</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>dev</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>file</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>pipe</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>stdio</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>udp</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>tcp</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>unix</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>qemu-vdagent</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>dbus</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </console>
Jan 26 16:27:26 compute-0 nova_compute[185389]:   </devices>
Jan 26 16:27:26 compute-0 nova_compute[185389]:   <features>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <gic supported='no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <vmcoreinfo supported='yes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <genid supported='yes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <backingStoreInput supported='yes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <backup supported='yes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <async-teardown supported='yes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <s390-pv supported='no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <ps2 supported='yes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <tdx supported='no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <sev supported='no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <sgx supported='no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <hyperv supported='yes'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='features'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>relaxed</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>vapic</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>spinlocks</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>vpindex</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>runtime</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>synic</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>stimer</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>reset</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>vendor_id</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>frequencies</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>reenlightenment</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>tlbflush</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>ipi</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>avic</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>emsr_bitmap</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>xmm_input</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <defaults>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <spinlocks>4095</spinlocks>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <stimer_direct>on</stimer_direct>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <tlbflush_direct>on</tlbflush_direct>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <tlbflush_extended>on</tlbflush_extended>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </defaults>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </hyperv>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <launchSecurity supported='no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:   </features>
Jan 26 16:27:26 compute-0 nova_compute[185389]: </domainCapabilities>
Jan 26 16:27:26 compute-0 nova_compute[185389]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.430 185393 DEBUG nova.virt.libvirt.host [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Jan 26 16:27:26 compute-0 nova_compute[185389]: <domainCapabilities>
Jan 26 16:27:26 compute-0 nova_compute[185389]:   <path>/usr/libexec/qemu-kvm</path>
Jan 26 16:27:26 compute-0 nova_compute[185389]:   <domain>kvm</domain>
Jan 26 16:27:26 compute-0 nova_compute[185389]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 26 16:27:26 compute-0 nova_compute[185389]:   <arch>x86_64</arch>
Jan 26 16:27:26 compute-0 nova_compute[185389]:   <vcpu max='4096'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:   <iothreads supported='yes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:   <os supported='yes'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <enum name='firmware'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <value>efi</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <loader supported='yes'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='type'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>rom</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>pflash</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='readonly'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>yes</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>no</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='secure'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>yes</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>no</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </loader>
Jan 26 16:27:26 compute-0 nova_compute[185389]:   </os>
Jan 26 16:27:26 compute-0 nova_compute[185389]:   <cpu>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <mode name='host-passthrough' supported='yes'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='hostPassthroughMigratable'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>on</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>off</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </mode>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <mode name='maximum' supported='yes'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='maximumMigratable'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>on</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>off</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </mode>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <mode name='host-model' supported='yes'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <vendor>AMD</vendor>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='require' name='x2apic'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='require' name='tsc-deadline'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='require' name='hypervisor'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='require' name='tsc_adjust'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='require' name='spec-ctrl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='require' name='stibp'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='require' name='ssbd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='require' name='cmp_legacy'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='require' name='overflow-recov'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='require' name='succor'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='require' name='ibrs'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='require' name='amd-ssbd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='require' name='virt-ssbd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='require' name='lbrv'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='require' name='tsc-scale'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='require' name='vmcb-clean'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='require' name='flushbyasid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='require' name='pause-filter'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='require' name='pfthreshold'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='require' name='svme-addr-chk'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <feature policy='disable' name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </mode>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <mode name='custom' supported='yes'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Broadwell'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Broadwell-IBRS'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Broadwell-noTSX'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Broadwell-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Broadwell-v2'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Broadwell-v3'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Broadwell-v4'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Cascadelake-Server'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Cascadelake-Server-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Cascadelake-Server-v2'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Cascadelake-Server-v3'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Cascadelake-Server-v4'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Cascadelake-Server-v5'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='ClearwaterForest'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-ne-convert'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni-int16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni-int8'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='bhi-ctrl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='bhi-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='cldemote'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='cmpccxadd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ddpd-u'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fbsdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrs'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='intel-psfd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ipred-ctrl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='lam'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='mcdt-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdir64b'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdiri'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pbrsb-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='prefetchiti'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='psdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rrsba-ctrl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='serialize'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='sha512'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='sm3'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='sm4'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ss'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='ClearwaterForest-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-ne-convert'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni-int16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni-int8'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='bhi-ctrl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='bhi-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='cldemote'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='cmpccxadd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ddpd-u'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fbsdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrs'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='intel-psfd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ipred-ctrl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='lam'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='mcdt-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdir64b'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdiri'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pbrsb-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='prefetchiti'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='psdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rrsba-ctrl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='serialize'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='sha512'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='sm3'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='sm4'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ss'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Cooperlake'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='taa-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Cooperlake-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='taa-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Cooperlake-v2'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='taa-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Denverton'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='mpx'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Denverton-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='mpx'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Denverton-v2'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Denverton-v3'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Dhyana-v2'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='EPYC-Genoa'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amd-psfd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='auto-ibrs'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='no-nested-data-bp'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='null-sel-clr-base'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='stibp-always-on'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='EPYC-Genoa-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amd-psfd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='auto-ibrs'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='no-nested-data-bp'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='null-sel-clr-base'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='stibp-always-on'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='EPYC-Genoa-v2'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amd-psfd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='auto-ibrs'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fs-gs-base-ns'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='no-nested-data-bp'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='null-sel-clr-base'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='perfmon-v2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='stibp-always-on'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='EPYC-Milan'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='EPYC-Milan-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='EPYC-Milan-v2'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amd-psfd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='no-nested-data-bp'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='null-sel-clr-base'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='stibp-always-on'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='EPYC-Milan-v3'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amd-psfd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='no-nested-data-bp'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='null-sel-clr-base'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='stibp-always-on'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='EPYC-Rome'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='EPYC-Rome-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='EPYC-Rome-v2'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='EPYC-Rome-v3'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='EPYC-Turin'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amd-psfd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='auto-ibrs'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vp2intersect'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fs-gs-base-ns'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibpb-brtype'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdir64b'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdiri'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='no-nested-data-bp'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='null-sel-clr-base'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='perfmon-v2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='prefetchi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='sbpb'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='srso-user-kernel-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='stibp-always-on'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='EPYC-Turin-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amd-psfd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='auto-ibrs'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vp2intersect'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fs-gs-base-ns'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibpb-brtype'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdir64b'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdiri'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='no-nested-data-bp'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='null-sel-clr-base'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='perfmon-v2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='prefetchi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='sbpb'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='srso-user-kernel-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='stibp-always-on'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='EPYC-v3'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='EPYC-v4'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='EPYC-v5'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='GraniteRapids'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-fp16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-int8'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-tile'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-fp16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fbsdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrc'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrs'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fzrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='mcdt-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pbrsb-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='prefetchiti'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='psdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='serialize'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='taa-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='tsx-ldtrk'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xfd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='GraniteRapids-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-fp16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-int8'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-tile'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-fp16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fbsdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrc'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrs'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fzrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='mcdt-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pbrsb-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='prefetchiti'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='psdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='serialize'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='taa-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='tsx-ldtrk'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xfd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='GraniteRapids-v2'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-fp16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-int8'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-tile'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx10'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx10-128'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx10-256'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx10-512'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-fp16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='cldemote'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fbsdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrc'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrs'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fzrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='mcdt-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdir64b'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdiri'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pbrsb-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='prefetchiti'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='psdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='serialize'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ss'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='taa-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='tsx-ldtrk'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xfd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='GraniteRapids-v3'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-fp16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-int8'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-tile'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx10'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx10-128'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx10-256'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx10-512'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-fp16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='cldemote'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fbsdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrc'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrs'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fzrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='mcdt-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdir64b'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdiri'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pbrsb-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='prefetchiti'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='psdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='serialize'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ss'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='taa-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='tsx-ldtrk'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xfd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Haswell'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Haswell-IBRS'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Haswell-noTSX'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Haswell-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Haswell-v2'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Haswell-v3'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Haswell-v4'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Icelake-Server'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Icelake-Server-noTSX'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Icelake-Server-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Icelake-Server-v2'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Icelake-Server-v3'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='taa-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Icelake-Server-v4'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='taa-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Icelake-Server-v5'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='taa-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Icelake-Server-v6'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='taa-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Icelake-Server-v7'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='taa-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='IvyBridge'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='IvyBridge-IBRS'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='IvyBridge-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='IvyBridge-v2'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='KnightsMill'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-4fmaps'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-4vnniw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512er'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512pf'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ss'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='KnightsMill-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-4fmaps'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-4vnniw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512er'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512pf'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ss'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Opteron_G4'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fma4'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xop'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Opteron_G4-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fma4'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xop'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Opteron_G5'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fma4'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='tbm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xop'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Opteron_G5-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fma4'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='tbm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xop'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='SapphireRapids'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-int8'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-tile'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-fp16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrc'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrs'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fzrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='serialize'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='taa-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='tsx-ldtrk'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xfd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='SapphireRapids-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-int8'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-tile'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-fp16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrc'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrs'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fzrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='serialize'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='taa-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='tsx-ldtrk'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xfd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='SapphireRapids-v2'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-int8'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-tile'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-fp16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fbsdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrc'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrs'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fzrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='psdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='serialize'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='taa-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='tsx-ldtrk'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xfd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='SapphireRapids-v3'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-int8'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-tile'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-fp16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='cldemote'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fbsdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrc'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrs'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fzrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdir64b'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdiri'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='psdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='serialize'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ss'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='taa-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='tsx-ldtrk'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xfd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='SapphireRapids-v4'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-int8'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='amx-tile'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-bf16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-fp16'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512-vpopcntdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bitalg'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vbmi2'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='cldemote'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fbsdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrc'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrs'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fzrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='la57'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdir64b'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdiri'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='psdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='serialize'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ss'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='taa-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='tsx-ldtrk'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xfd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='SierraForest'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-ne-convert'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni-int8'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='cmpccxadd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fbsdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrs'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='mcdt-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pbrsb-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='psdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='serialize'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='SierraForest-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-ne-convert'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni-int8'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='cmpccxadd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fbsdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrs'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='mcdt-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pbrsb-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='psdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='serialize'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='SierraForest-v2'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-ne-convert'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni-int8'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='bhi-ctrl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='cldemote'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='cmpccxadd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fbsdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrs'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='intel-psfd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ipred-ctrl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='lam'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='mcdt-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdir64b'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdiri'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pbrsb-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='psdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rrsba-ctrl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='serialize'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ss'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='SierraForest-v3'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-ifma'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-ne-convert'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx-vnni-int8'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='bhi-ctrl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='bus-lock-detect'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='cldemote'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='cmpccxadd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fbsdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='fsrs'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ibrs-all'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='intel-psfd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ipred-ctrl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='lam'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='mcdt-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdir64b'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdiri'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pbrsb-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='psdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rrsba-ctrl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='sbdr-ssdp-no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='serialize'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ss'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vaes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='vpclmulqdq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Skylake-Client'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Skylake-Client-IBRS'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Skylake-Client-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Skylake-Client-v2'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Skylake-Client-v3'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Skylake-Client-v4'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Skylake-Server'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Skylake-Server-IBRS'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Skylake-Server-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Skylake-Server-v2'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='hle'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='rtm'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Skylake-Server-v3'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Skylake-Server-v4'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Skylake-Server-v5'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512bw'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512cd'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512dq'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512f'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='avx512vl'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='invpcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pcid'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='pku'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Snowridge'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='cldemote'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='core-capability'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdir64b'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdiri'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='mpx'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='split-lock-detect'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Snowridge-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='cldemote'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='core-capability'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdir64b'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdiri'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='mpx'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='split-lock-detect'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Snowridge-v2'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='cldemote'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='core-capability'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdir64b'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdiri'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='split-lock-detect'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Snowridge-v3'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='cldemote'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='core-capability'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdir64b'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdiri'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='split-lock-detect'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='Snowridge-v4'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='cldemote'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='erms'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='gfni'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdir64b'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='movdiri'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='xsaves'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='athlon'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='3dnow'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='3dnowext'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='athlon-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='3dnow'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='3dnowext'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='core2duo'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ss'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='core2duo-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ss'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='coreduo'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ss'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='coreduo-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ss'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='n270'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ss'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='n270-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='ss'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='phenom'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='3dnow'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='3dnowext'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <blockers model='phenom-v1'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='3dnow'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <feature name='3dnowext'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </blockers>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </mode>
Jan 26 16:27:26 compute-0 nova_compute[185389]:   </cpu>
Jan 26 16:27:26 compute-0 nova_compute[185389]:   <memoryBacking supported='yes'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <enum name='sourceType'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <value>file</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <value>anonymous</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <value>memfd</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:   </memoryBacking>
Jan 26 16:27:26 compute-0 nova_compute[185389]:   <devices>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <disk supported='yes'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='diskDevice'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>disk</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>cdrom</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>floppy</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>lun</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='bus'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>fdc</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>scsi</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>virtio</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>usb</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>sata</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='model'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>virtio</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>virtio-transitional</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>virtio-non-transitional</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </disk>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <graphics supported='yes'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='type'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>vnc</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>egl-headless</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>dbus</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </graphics>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <video supported='yes'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='modelType'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>vga</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>cirrus</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>virtio</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>none</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>bochs</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>ramfb</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </video>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <hostdev supported='yes'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='mode'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>subsystem</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='startupPolicy'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>default</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>mandatory</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>requisite</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>optional</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='subsysType'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>usb</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>pci</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>scsi</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='capsType'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='pciBackend'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </hostdev>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <rng supported='yes'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='model'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>virtio</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>virtio-transitional</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>virtio-non-transitional</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='backendModel'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>random</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>egd</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>builtin</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </rng>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <filesystem supported='yes'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='driverType'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>path</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>handle</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>virtiofs</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </filesystem>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <tpm supported='yes'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='model'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>tpm-tis</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>tpm-crb</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='backendModel'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>emulator</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>external</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='backendVersion'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>2.0</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </tpm>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <redirdev supported='yes'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='bus'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>usb</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </redirdev>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <channel supported='yes'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='type'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>pty</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>unix</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </channel>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <crypto supported='yes'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='model'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='type'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>qemu</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='backendModel'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>builtin</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </crypto>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <interface supported='yes'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='backendType'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>default</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>passt</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </interface>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <panic supported='yes'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='model'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>isa</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>hyperv</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </panic>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <console supported='yes'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='type'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>null</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>vc</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>pty</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>dev</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>file</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>pipe</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>stdio</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>udp</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>tcp</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>unix</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>qemu-vdagent</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>dbus</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </console>
Jan 26 16:27:26 compute-0 nova_compute[185389]:   </devices>
Jan 26 16:27:26 compute-0 nova_compute[185389]:   <features>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <gic supported='no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <vmcoreinfo supported='yes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <genid supported='yes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <backingStoreInput supported='yes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <backup supported='yes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <async-teardown supported='yes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <s390-pv supported='no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <ps2 supported='yes'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <tdx supported='no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <sev supported='no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <sgx supported='no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <hyperv supported='yes'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <enum name='features'>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>relaxed</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>vapic</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>spinlocks</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>vpindex</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>runtime</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>synic</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>stimer</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>reset</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>vendor_id</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>frequencies</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>reenlightenment</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>tlbflush</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>ipi</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>avic</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>emsr_bitmap</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <value>xmm_input</value>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </enum>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       <defaults>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <spinlocks>4095</spinlocks>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <stimer_direct>on</stimer_direct>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <tlbflush_direct>on</tlbflush_direct>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <tlbflush_extended>on</tlbflush_extended>
Jan 26 16:27:26 compute-0 nova_compute[185389]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 26 16:27:26 compute-0 nova_compute[185389]:       </defaults>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     </hyperv>
Jan 26 16:27:26 compute-0 nova_compute[185389]:     <launchSecurity supported='no'/>
Jan 26 16:27:26 compute-0 nova_compute[185389]:   </features>
Jan 26 16:27:26 compute-0 nova_compute[185389]: </domainCapabilities>
Jan 26 16:27:26 compute-0 nova_compute[185389]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.522 185393 DEBUG nova.virt.libvirt.host [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.523 185393 DEBUG nova.virt.libvirt.host [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.523 185393 DEBUG nova.virt.libvirt.host [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.530 185393 INFO nova.virt.libvirt.host [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Secure Boot support detected
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.533 185393 INFO nova.virt.libvirt.driver [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.533 185393 INFO nova.virt.libvirt.driver [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.548 185393 DEBUG nova.virt.libvirt.driver [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.584 185393 INFO nova.virt.node [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Determined node identity b0bb5d31-f35b-4a04-b67d-66acc24fb822 from /var/lib/nova/compute_id
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.609 185393 WARNING nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Compute nodes ['b0bb5d31-f35b-4a04-b67d-66acc24fb822'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.656 185393 INFO nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.679 185393 WARNING nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.680 185393 DEBUG oslo_concurrency.lockutils [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.681 185393 DEBUG oslo_concurrency.lockutils [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.682 185393 DEBUG oslo_concurrency.lockutils [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:27:26 compute-0 nova_compute[185389]: 2026-01-26 16:27:26.682 185393 DEBUG nova.compute.resource_tracker [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:27:26 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Jan 26 16:27:26 compute-0 systemd[1]: Started libvirt nodedev daemon.
Jan 26 16:27:27 compute-0 nova_compute[185389]: 2026-01-26 16:27:27.017 185393 WARNING nova.virt.libvirt.driver [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:27:27 compute-0 nova_compute[185389]: 2026-01-26 16:27:27.018 185393 DEBUG nova.compute.resource_tracker [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6051MB free_disk=72.65243148803711GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:27:27 compute-0 nova_compute[185389]: 2026-01-26 16:27:27.018 185393 DEBUG oslo_concurrency.lockutils [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:27:27 compute-0 nova_compute[185389]: 2026-01-26 16:27:27.018 185393 DEBUG oslo_concurrency.lockutils [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:27:27 compute-0 nova_compute[185389]: 2026-01-26 16:27:27.049 185393 WARNING nova.compute.resource_tracker [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] No compute node record for compute-0.ctlplane.example.com:b0bb5d31-f35b-4a04-b67d-66acc24fb822: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host b0bb5d31-f35b-4a04-b67d-66acc24fb822 could not be found.
Jan 26 16:27:27 compute-0 nova_compute[185389]: 2026-01-26 16:27:27.075 185393 INFO nova.compute.resource_tracker [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: b0bb5d31-f35b-4a04-b67d-66acc24fb822
Jan 26 16:27:27 compute-0 nova_compute[185389]: 2026-01-26 16:27:27.141 185393 DEBUG nova.compute.resource_tracker [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:27:27 compute-0 nova_compute[185389]: 2026-01-26 16:27:27.141 185393 DEBUG nova.compute.resource_tracker [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:27:28 compute-0 nova_compute[185389]: 2026-01-26 16:27:28.249 185393 INFO nova.scheduler.client.report [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [req-84dbf373-622b-4232-ba32-ef21ee4b96f1] Created resource provider record via placement API for resource provider with UUID b0bb5d31-f35b-4a04-b67d-66acc24fb822 and name compute-0.ctlplane.example.com.
Jan 26 16:27:28 compute-0 nova_compute[185389]: 2026-01-26 16:27:28.693 185393 DEBUG nova.virt.libvirt.host [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Jan 26 16:27:28 compute-0 nova_compute[185389]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Jan 26 16:27:28 compute-0 nova_compute[185389]: 2026-01-26 16:27:28.694 185393 INFO nova.virt.libvirt.host [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] kernel doesn't support AMD SEV
Jan 26 16:27:28 compute-0 nova_compute[185389]: 2026-01-26 16:27:28.694 185393 DEBUG nova.compute.provider_tree [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Updating inventory in ProviderTree for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 26 16:27:28 compute-0 nova_compute[185389]: 2026-01-26 16:27:28.695 185393 DEBUG nova.virt.libvirt.driver [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:27:28 compute-0 nova_compute[185389]: 2026-01-26 16:27:28.788 185393 DEBUG nova.scheduler.client.report [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Updated inventory for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Jan 26 16:27:28 compute-0 nova_compute[185389]: 2026-01-26 16:27:28.789 185393 DEBUG nova.compute.provider_tree [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Updating resource provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 26 16:27:28 compute-0 nova_compute[185389]: 2026-01-26 16:27:28.789 185393 DEBUG nova.compute.provider_tree [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Updating inventory in ProviderTree for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 26 16:27:28 compute-0 nova_compute[185389]: 2026-01-26 16:27:28.917 185393 DEBUG nova.compute.provider_tree [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Updating resource provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 26 16:27:28 compute-0 nova_compute[185389]: 2026-01-26 16:27:28.974 185393 DEBUG nova.compute.resource_tracker [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:27:28 compute-0 nova_compute[185389]: 2026-01-26 16:27:28.974 185393 DEBUG oslo_concurrency.lockutils [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.956s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:27:28 compute-0 nova_compute[185389]: 2026-01-26 16:27:28.974 185393 DEBUG nova.service [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Jan 26 16:27:29 compute-0 nova_compute[185389]: 2026-01-26 16:27:29.604 185393 DEBUG nova.service [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Jan 26 16:27:29 compute-0 nova_compute[185389]: 2026-01-26 16:27:29.604 185393 DEBUG nova.servicegroup.drivers.db [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Jan 26 16:27:30 compute-0 sshd-session[185710]: Accepted publickey for zuul from 192.168.122.30 port 45882 ssh2: ECDSA SHA256:e2nTWydmUlQnW5BYhfFSR+TRarHnUqn0luI8Mjiyqhk
Jan 26 16:27:30 compute-0 systemd-logind[788]: New session 26 of user zuul.
Jan 26 16:27:30 compute-0 systemd[1]: Started Session 26 of User zuul.
Jan 26 16:27:30 compute-0 sshd-session[185710]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 26 16:27:31 compute-0 python3.9[185863]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 16:27:33 compute-0 sudo[186017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcrvegsnbrbjmtrzdgpledgshrvofwjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444852.5941665-31-105512081993408/AnsiballZ_systemd_service.py'
Jan 26 16:27:33 compute-0 sudo[186017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:27:33 compute-0 python3.9[186019]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 26 16:27:33 compute-0 systemd[1]: Reloading.
Jan 26 16:27:33 compute-0 systemd-rc-local-generator[186047]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 16:27:33 compute-0 systemd-sysv-generator[186051]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 16:27:33 compute-0 sudo[186017]: pam_unix(sudo:session): session closed for user root
Jan 26 16:27:34 compute-0 python3.9[186204]: ansible-ansible.builtin.service_facts Invoked
Jan 26 16:27:35 compute-0 network[186221]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 26 16:27:35 compute-0 network[186222]: 'network-scripts' will be removed from distribution in near future.
Jan 26 16:27:35 compute-0 network[186223]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 26 16:27:37 compute-0 podman[186311]: 2026-01-26 16:27:37.858011559 +0000 UTC m=+0.096445198 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 26 16:27:38 compute-0 nova_compute[185389]: 2026-01-26 16:27:38.606 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:27:38 compute-0 nova_compute[185389]: 2026-01-26 16:27:38.638 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:27:39 compute-0 sudo[186512]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-heydduxcwrxnmbcvbfjudkeifminciqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444858.759922-50-265142885070477/AnsiballZ_systemd_service.py'
Jan 26 16:27:39 compute-0 sudo[186512]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:27:39 compute-0 python3.9[186514]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 16:27:39 compute-0 sudo[186512]: pam_unix(sudo:session): session closed for user root
Jan 26 16:27:40 compute-0 sudo[186665]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ouemutsaavdmwoszyivajtezocsshwjq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444859.6669512-60-188418991961409/AnsiballZ_file.py'
Jan 26 16:27:40 compute-0 sudo[186665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:27:40 compute-0 python3.9[186667]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:27:40 compute-0 sudo[186665]: pam_unix(sudo:session): session closed for user root
Jan 26 16:27:40 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 26 16:27:40 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 26 16:27:41 compute-0 sudo[186818]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtowejovtunfnjdgfcbcfyshqqhfkyhs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444860.7480438-68-68918119231374/AnsiballZ_file.py'
Jan 26 16:27:41 compute-0 sudo[186818]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:27:41 compute-0 python3.9[186820]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:27:41 compute-0 sudo[186818]: pam_unix(sudo:session): session closed for user root
Jan 26 16:27:41 compute-0 sudo[186980]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwuziazayqyrphkxyruhampydyzjgkzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444861.4828646-77-68024719515804/AnsiballZ_command.py'
Jan 26 16:27:41 compute-0 sudo[186980]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:27:42 compute-0 podman[186944]: 2026-01-26 16:27:42.02217005 +0000 UTC m=+0.126613857 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, 
managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 26 16:27:42 compute-0 python3.9[186986]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 16:27:42 compute-0 sudo[186980]: pam_unix(sudo:session): session closed for user root
Jan 26 16:27:42 compute-0 python3.9[187151]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 26 16:27:43 compute-0 sudo[187301]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbinixymhwuvvbseeaqkivkewroqfauz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444863.2353442-95-182887622142075/AnsiballZ_systemd_service.py'
Jan 26 16:27:43 compute-0 sudo[187301]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:27:43 compute-0 python3.9[187303]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 26 16:27:43 compute-0 systemd[1]: Reloading.
Jan 26 16:27:43 compute-0 systemd-sysv-generator[187336]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 16:27:43 compute-0 systemd-rc-local-generator[187332]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 16:27:44 compute-0 sudo[187301]: pam_unix(sudo:session): session closed for user root
Jan 26 16:27:44 compute-0 sudo[187490]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnmjdsaxmyqckgmbgrmxpwlmajsmxdfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444864.3446574-103-59289729705231/AnsiballZ_command.py'
Jan 26 16:27:44 compute-0 sudo[187490]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:27:44 compute-0 python3.9[187492]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 16:27:44 compute-0 sudo[187490]: pam_unix(sudo:session): session closed for user root
Jan 26 16:27:45 compute-0 sudo[187644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wuhdnxpiqofexqvdqhycurccylvzdfoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444865.2135167-112-105779878582189/AnsiballZ_file.py'
Jan 26 16:27:45 compute-0 sudo[187644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:27:45 compute-0 python3.9[187646]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/telemetry recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:27:45 compute-0 sudo[187644]: pam_unix(sudo:session): session closed for user root
Jan 26 16:27:46 compute-0 python3.9[187796]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 16:27:47 compute-0 sudo[187948]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-paufvkvjgapxxkhfimnqkpedcttkhqfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444866.8337417-128-101943800930807/AnsiballZ_group.py'
Jan 26 16:27:47 compute-0 sudo[187948]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:27:47 compute-0 python3.9[187950]: ansible-ansible.builtin.group Invoked with name=libvirt state=present force=False system=False local=False non_unique=False gid=None gid_min=None gid_max=None
Jan 26 16:27:47 compute-0 sudo[187948]: pam_unix(sudo:session): session closed for user root
Jan 26 16:27:48 compute-0 sudo[188100]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erusqcdbgwtyogftjlghfbhjmaepiipr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444868.188848-139-43635640583153/AnsiballZ_getent.py'
Jan 26 16:27:48 compute-0 sudo[188100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:27:48 compute-0 sshd-session[187415]: Connection reset by 205.210.31.211 port 58502 [preauth]
Jan 26 16:27:48 compute-0 python3.9[188102]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Jan 26 16:27:48 compute-0 sudo[188100]: pam_unix(sudo:session): session closed for user root
Jan 26 16:27:49 compute-0 sudo[188253]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqfhsufyllrglrtyhcckcsxcxxhmfwey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444869.0828807-147-161576640506214/AnsiballZ_group.py'
Jan 26 16:27:49 compute-0 sudo[188253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:27:49 compute-0 python3.9[188255]: ansible-ansible.builtin.group Invoked with gid=42405 name=ceilometer state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 26 16:27:49 compute-0 groupadd[188256]: group added to /etc/group: name=ceilometer, GID=42405
Jan 26 16:27:49 compute-0 groupadd[188256]: group added to /etc/gshadow: name=ceilometer
Jan 26 16:27:49 compute-0 groupadd[188256]: new group: name=ceilometer, GID=42405
Jan 26 16:27:49 compute-0 sudo[188253]: pam_unix(sudo:session): session closed for user root
Jan 26 16:27:50 compute-0 sudo[188411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnadfcaqksaqbryhyqkuafnownlytals ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444869.839212-155-65173338557907/AnsiballZ_user.py'
Jan 26 16:27:50 compute-0 sudo[188411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:27:50 compute-0 python3.9[188413]: ansible-ansible.builtin.user Invoked with comment=ceilometer user group=ceilometer groups=['libvirt'] name=ceilometer shell=/sbin/nologin state=present uid=42405 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 26 16:27:50 compute-0 useradd[188415]: new user: name=ceilometer, UID=42405, GID=42405, home=/home/ceilometer, shell=/sbin/nologin, from=/dev/pts/0
Jan 26 16:27:50 compute-0 useradd[188415]: add 'ceilometer' to group 'libvirt'
Jan 26 16:27:50 compute-0 useradd[188415]: add 'ceilometer' to shadow group 'libvirt'
Jan 26 16:27:50 compute-0 sudo[188411]: pam_unix(sudo:session): session closed for user root
Jan 26 16:27:51 compute-0 python3.9[188571]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:27:52 compute-0 python3.9[188692]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1769444871.4473107-181-77066894905324/.source.conf _original_basename=ceilometer.conf follow=False checksum=806b21daa538a66a80669be8bf74c414d178dfbc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:27:53 compute-0 python3.9[188842]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:27:53 compute-0 python3.9[188963]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1769444872.9549406-181-80937862816548/.source.yaml _original_basename=polling.yaml follow=False checksum=6c8680a286285f2e0ef9fa528ca754765e5ed0e5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:27:54 compute-0 python3.9[189113]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:27:55 compute-0 python3.9[189234]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1769444874.1127274-181-105068646137450/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:27:55 compute-0 python3.9[189384]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 16:27:56 compute-0 python3.9[189536]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 16:27:57 compute-0 python3.9[189688]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:27:57 compute-0 python3.9[189809]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769444876.649761-240-241720266782059/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=e86e0e43000ce9ccfe5aefbf8e8f2e3d15d05584 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:27:58 compute-0 python3.9[189959]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/openstack_network_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:27:59 compute-0 python3.9[190080]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/openstack_network_exporter.yaml mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769444877.854375-240-236480208686193/.source.yaml follow=False _original_basename=openstack_network_exporter.yaml.j2 checksum=87dede51a10e22722618c1900db75cb764463d91 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:27:59 compute-0 python3.9[190230]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:28:00 compute-0 python3.9[190351]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/firewall.yaml mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769444879.263021-269-267623465869659/.source.yaml _original_basename=firewall.yaml follow=False checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:28:01 compute-0 python3.9[190501]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:28:01 compute-0 python3.9[190622]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/node_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1769444880.666389-285-281046048997749/.source.yaml _original_basename=node_exporter.yaml follow=False checksum=81d906d3e1e8c4f8367276f5d3a67b80ca7e989e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:28:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:28:01.702 106955 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:28:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:28:01.704 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:28:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:28:01.704 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:28:02 compute-0 python3.9[190772]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:28:02 compute-0 python3.9[190893]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/podman_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1769444881.812223-300-64814289934225/.source.yaml _original_basename=podman_exporter.yaml follow=False checksum=7ccb5eca2ff1dc337c3f3ecbbff5245af7149c47 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:28:03 compute-0 python3.9[191043]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:28:04 compute-0 python3.9[191164]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1769444883.157069-315-168248773131098/.source.yaml _original_basename=ceilometer_prom_exporter.yaml follow=False checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:28:04 compute-0 sudo[191314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdwouevidfcesnpwaksddakqvxhjbhlc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444884.4723592-330-212908276734322/AnsiballZ_file.py'
Jan 26 16:28:04 compute-0 sudo[191314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:28:04 compute-0 python3.9[191316]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:28:05 compute-0 sudo[191314]: pam_unix(sudo:session): session closed for user root
Jan 26 16:28:05 compute-0 sudo[191466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtcvicverfqyvmpfjgigwtusbftmyszp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444885.4123316-338-214051277048316/AnsiballZ_file.py'
Jan 26 16:28:05 compute-0 sudo[191466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:28:05 compute-0 python3.9[191468]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:28:06 compute-0 sudo[191466]: pam_unix(sudo:session): session closed for user root
Jan 26 16:28:06 compute-0 python3.9[191618]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 16:28:07 compute-0 python3.9[191770]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 16:28:08 compute-0 podman[191873]: 2026-01-26 16:28:08.232824687 +0000 UTC m=+0.093180239 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:28:08 compute-0 python3.9[191935]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 16:28:08 compute-0 sudo[192093]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apoxmkdonxwhwefnjyfjrxgwrdigjmod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444888.615508-370-171435551986514/AnsiballZ_file.py'
Jan 26 16:28:08 compute-0 sudo[192093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:28:09 compute-0 python3.9[192095]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:28:09 compute-0 sudo[192093]: pam_unix(sudo:session): session closed for user root
Jan 26 16:28:09 compute-0 sudo[192245]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnkituqtjairifnxevaxmnzsdmazihuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444889.3440785-378-28240338247153/AnsiballZ_systemd_service.py'
Jan 26 16:28:09 compute-0 sudo[192245]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:28:09 compute-0 python3.9[192247]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=podman.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 16:28:10 compute-0 systemd[1]: Reloading.
Jan 26 16:28:10 compute-0 systemd-rc-local-generator[192269]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 16:28:10 compute-0 systemd-sysv-generator[192277]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 16:28:10 compute-0 systemd[1]: Listening on Podman API Socket.
Jan 26 16:28:10 compute-0 sudo[192245]: pam_unix(sudo:session): session closed for user root
Jan 26 16:28:11 compute-0 sudo[192435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oknyvubqnaueqtacqdbnwmieqjwyijrm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444890.771527-387-238346773699457/AnsiballZ_stat.py'
Jan 26 16:28:11 compute-0 sudo[192435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:28:11 compute-0 python3.9[192437]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:28:11 compute-0 sudo[192435]: pam_unix(sudo:session): session closed for user root
Jan 26 16:28:11 compute-0 sudo[192558]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lljadcdvlxztinttqwomemvvogfoypho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444890.771527-387-238346773699457/AnsiballZ_copy.py'
Jan 26 16:28:11 compute-0 sudo[192558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:28:11 compute-0 python3.9[192560]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769444890.771527-387-238346773699457/.source _original_basename=healthcheck follow=False checksum=ebb343c21fce35a02591a9351660cb7035a47d42 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:28:11 compute-0 sudo[192558]: pam_unix(sudo:session): session closed for user root
Jan 26 16:28:12 compute-0 sudo[192656]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqjiewxciefmgvqtngjtrtidfvznhevu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444890.771527-387-238346773699457/AnsiballZ_stat.py'
Jan 26 16:28:12 compute-0 sudo[192656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:28:12 compute-0 podman[192584]: 2026-01-26 16:28:12.265068227 +0000 UTC m=+0.148838617 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:28:12 compute-0 python3.9[192662]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:28:12 compute-0 sudo[192656]: pam_unix(sudo:session): session closed for user root
Jan 26 16:28:12 compute-0 sudo[192784]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgahsecehcmtqjqskhtpqtvjzdqxnphf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444890.771527-387-238346773699457/AnsiballZ_copy.py'
Jan 26 16:28:12 compute-0 sudo[192784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:28:13 compute-0 python3.9[192786]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769444890.771527-387-238346773699457/.source.future _original_basename=healthcheck.future follow=False checksum=d500a98192f4ddd70b4dfdc059e2d81aed36a294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:28:13 compute-0 sudo[192784]: pam_unix(sudo:session): session closed for user root
Jan 26 16:28:14 compute-0 sudo[192936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojpckxzzvanlzbsevplnshcypfkvnqna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444893.979481-419-88297844676224/AnsiballZ_file.py'
Jan 26 16:28:14 compute-0 sudo[192936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:28:14 compute-0 python3.9[192938]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:28:14 compute-0 sudo[192936]: pam_unix(sudo:session): session closed for user root
Jan 26 16:28:15 compute-0 sudo[193088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iiolftrnwxrheiqmjfcbjiromeriddzt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444895.020621-427-153336669404660/AnsiballZ_file.py'
Jan 26 16:28:15 compute-0 sudo[193088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:28:15 compute-0 python3.9[193090]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:28:15 compute-0 sudo[193088]: pam_unix(sudo:session): session closed for user root
Jan 26 16:28:16 compute-0 sudo[193242]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjggaeeufjglmvarzffzkuixipagvygf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444895.759391-435-178881324740812/AnsiballZ_stat.py'
Jan 26 16:28:16 compute-0 sudo[193242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:28:16 compute-0 python3.9[193244]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:28:16 compute-0 sudo[193242]: pam_unix(sudo:session): session closed for user root
Jan 26 16:28:16 compute-0 sudo[193365]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afxzlyiuwkvbqncojcdyiqormfmypxoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444895.759391-435-178881324740812/AnsiballZ_copy.py'
Jan 26 16:28:16 compute-0 sudo[193365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:28:16 compute-0 python3.9[193367]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ceilometer_agent_compute.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769444895.759391-435-178881324740812/.source.json _original_basename=.x_yn9_s7 follow=False checksum=ce2b0c83293a970bafffa087afa083dd7c93a79c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:28:16 compute-0 sudo[193365]: pam_unix(sudo:session): session closed for user root
Jan 26 16:28:17 compute-0 sshd-session[193166]: Received disconnect from 45.249.247.124 port 44428:11:  [preauth]
Jan 26 16:28:17 compute-0 sshd-session[193166]: Disconnected from authenticating user root 45.249.247.124 port 44428 [preauth]
Jan 26 16:28:17 compute-0 python3.9[193518]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ceilometer_agent_compute state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:28:21 compute-0 sudo[193939]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjvwimtfiplicniwcfffwrwdemxudigk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444900.6638649-475-46505363100162/AnsiballZ_container_config_data.py'
Jan 26 16:28:21 compute-0 sudo[193939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:28:21 compute-0 python3.9[193941]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ceilometer_agent_compute config_pattern=*.json debug=False
Jan 26 16:28:21 compute-0 sudo[193939]: pam_unix(sudo:session): session closed for user root
Jan 26 16:28:22 compute-0 sudo[194091]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqerurfitzmqnjkmltjlzzjztmoedryr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444901.881148-486-27165821192744/AnsiballZ_container_config_hash.py'
Jan 26 16:28:22 compute-0 sudo[194091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:28:22 compute-0 python3.9[194093]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 26 16:28:22 compute-0 sudo[194091]: pam_unix(sudo:session): session closed for user root
Jan 26 16:28:23 compute-0 sudo[194243]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwbxfacikfuroaixadjndbroszpmueuc ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769444903.0752807-496-88454011360463/AnsiballZ_edpm_container_manage.py'
Jan 26 16:28:23 compute-0 sudo[194243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:28:23 compute-0 python3[194245]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ceilometer_agent_compute config_id=ceilometer_agent_compute config_overrides={} config_patterns=*.json containers=['ceilometer_agent_compute'] log_base_path=/var/log/containers/stdouts debug=False
Jan 26 16:28:24 compute-0 podman[194282]: 2026-01-26 16:28:24.11175107 +0000 UTC m=+0.059389655 container create 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, io.buildah.version=1.41.4, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260120, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:28:24 compute-0 podman[194282]: 2026-01-26 16:28:24.077612224 +0000 UTC m=+0.025250829 image pull 673eb625b19e37533ec15e219000c7d8233802c3ffa5adfdd7e8765ce31baf5c quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested
Jan 26 16:28:24 compute-0 python3[194245]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ceilometer_agent_compute --conmon-pidfile /run/ceilometer_agent_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env OS_ENDPOINT_TYPE=internal --env EDPM_CONFIG_HASH=6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595 --healthcheck-command /openstack/healthcheck compute --label config_id=ceilometer_agent_compute --label container_name=ceilometer_agent_compute --label managed_by=edpm_ansible --label config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']} --log-driver journald --log-level info --network host --security-opt label:type:ceilometer_polling_t --user ceilometer --volume /var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z --volume /var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z --volume /run/libvirt:/run/libvirt:shared,ro --volume /etc/hosts:/etc/hosts:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z --volume /dev/log:/dev/log --volume /var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested kolla_start
Jan 26 16:28:24 compute-0 sudo[194243]: pam_unix(sudo:session): session closed for user root
Jan 26 16:28:24 compute-0 nova_compute[185389]: 2026-01-26 16:28:24.721 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:28:24 compute-0 nova_compute[185389]: 2026-01-26 16:28:24.722 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:28:24 compute-0 nova_compute[185389]: 2026-01-26 16:28:24.722 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:28:24 compute-0 nova_compute[185389]: 2026-01-26 16:28:24.722 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 16:28:25 compute-0 sudo[194469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxawdgiukqsrotcvtflczvuuknhhvplc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444904.7149577-504-10541891993952/AnsiballZ_stat.py'
Jan 26 16:28:25 compute-0 sudo[194469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:28:25 compute-0 python3.9[194471]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 16:28:25 compute-0 sudo[194469]: pam_unix(sudo:session): session closed for user root
Jan 26 16:28:25 compute-0 sudo[194623]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzfguzlbgfszoopxjblaqsxmwrxmxgnz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444905.5164473-513-84339823825041/AnsiballZ_file.py'
Jan 26 16:28:25 compute-0 sudo[194623]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:28:26 compute-0 python3.9[194625]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:28:26 compute-0 sudo[194623]: pam_unix(sudo:session): session closed for user root
Jan 26 16:28:26 compute-0 nova_compute[185389]: 2026-01-26 16:28:26.309 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 16:28:26 compute-0 nova_compute[185389]: 2026-01-26 16:28:26.310 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:28:26 compute-0 nova_compute[185389]: 2026-01-26 16:28:26.310 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:28:26 compute-0 nova_compute[185389]: 2026-01-26 16:28:26.310 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:28:26 compute-0 nova_compute[185389]: 2026-01-26 16:28:26.310 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:28:26 compute-0 nova_compute[185389]: 2026-01-26 16:28:26.311 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:28:26 compute-0 nova_compute[185389]: 2026-01-26 16:28:26.311 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:28:26 compute-0 nova_compute[185389]: 2026-01-26 16:28:26.311 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:28:26 compute-0 nova_compute[185389]: 2026-01-26 16:28:26.311 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:28:26 compute-0 sudo[194699]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqeuedgbyqpwkzgltenomhtqafemgasn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444905.5164473-513-84339823825041/AnsiballZ_stat.py'
Jan 26 16:28:26 compute-0 sudo[194699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:28:26 compute-0 nova_compute[185389]: 2026-01-26 16:28:26.393 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:28:26 compute-0 nova_compute[185389]: 2026-01-26 16:28:26.393 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:28:26 compute-0 nova_compute[185389]: 2026-01-26 16:28:26.394 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:28:26 compute-0 nova_compute[185389]: 2026-01-26 16:28:26.394 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:28:26 compute-0 python3.9[194701]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_compute_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 16:28:26 compute-0 sudo[194699]: pam_unix(sudo:session): session closed for user root
Jan 26 16:28:26 compute-0 nova_compute[185389]: 2026-01-26 16:28:26.558 185393 WARNING nova.virt.libvirt.driver [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:28:26 compute-0 nova_compute[185389]: 2026-01-26 16:28:26.559 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6009MB free_disk=72.64990615844727GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:28:26 compute-0 nova_compute[185389]: 2026-01-26 16:28:26.559 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:28:26 compute-0 nova_compute[185389]: 2026-01-26 16:28:26.560 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:28:26 compute-0 nova_compute[185389]: 2026-01-26 16:28:26.666 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:28:26 compute-0 nova_compute[185389]: 2026-01-26 16:28:26.667 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:28:26 compute-0 nova_compute[185389]: 2026-01-26 16:28:26.691 185393 DEBUG nova.compute.provider_tree [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:28:26 compute-0 nova_compute[185389]: 2026-01-26 16:28:26.708 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:28:26 compute-0 nova_compute[185389]: 2026-01-26 16:28:26.709 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:28:26 compute-0 nova_compute[185389]: 2026-01-26 16:28:26.709 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.150s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:28:27 compute-0 sudo[194850]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ketgbdwgvouphxhoyprabwpxyqpqcbjx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444906.589388-513-46529427807262/AnsiballZ_copy.py'
Jan 26 16:28:27 compute-0 sudo[194850]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:28:27 compute-0 python3.9[194852]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769444906.589388-513-46529427807262/source dest=/etc/systemd/system/edpm_ceilometer_agent_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:28:27 compute-0 sudo[194850]: pam_unix(sudo:session): session closed for user root
Jan 26 16:28:27 compute-0 sudo[194926]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwvzpluwccffuddhwvxlqagaxwhvycyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444906.589388-513-46529427807262/AnsiballZ_systemd.py'
Jan 26 16:28:27 compute-0 sudo[194926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:28:28 compute-0 python3.9[194928]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 26 16:28:28 compute-0 systemd[1]: Reloading.
Jan 26 16:28:28 compute-0 systemd-rc-local-generator[194957]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 16:28:28 compute-0 systemd-sysv-generator[194960]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 16:28:28 compute-0 sudo[194926]: pam_unix(sudo:session): session closed for user root
Jan 26 16:28:29 compute-0 sudo[195038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwwvbpyelttvphisreqfywbvyzfuhfys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444906.589388-513-46529427807262/AnsiballZ_systemd.py'
Jan 26 16:28:29 compute-0 sudo[195038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:28:29 compute-0 python3.9[195040]: ansible-systemd Invoked with state=restarted name=edpm_ceilometer_agent_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 16:28:29 compute-0 systemd[1]: Reloading.
Jan 26 16:28:29 compute-0 systemd-rc-local-generator[195065]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 16:28:29 compute-0 systemd-sysv-generator[195069]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 16:28:29 compute-0 systemd[1]: Starting ceilometer_agent_compute container...
Jan 26 16:28:29 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:28:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9aa5e1ddd78180225ebfe0eec07c8360ad067ae0126160882241631fdd57b86/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Jan 26 16:28:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9aa5e1ddd78180225ebfe0eec07c8360ad067ae0126160882241631fdd57b86/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Jan 26 16:28:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9aa5e1ddd78180225ebfe0eec07c8360ad067ae0126160882241631fdd57b86/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Jan 26 16:28:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9aa5e1ddd78180225ebfe0eec07c8360ad067ae0126160882241631fdd57b86/merged/var/lib/kolla/config_files/src supports timestamps until 2038 (0x7fffffff)
Jan 26 16:28:29 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0.
Jan 26 16:28:29 compute-0 podman[195080]: 2026-01-26 16:28:29.896459819 +0000 UTC m=+0.174862364 container init 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260120, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 26 16:28:29 compute-0 ceilometer_agent_compute[195095]: + sudo -E kolla_set_configs
Jan 26 16:28:29 compute-0 sudo[195101]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Jan 26 16:28:29 compute-0 podman[195080]: 2026-01-26 16:28:29.929217339 +0000 UTC m=+0.207619894 container start 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260120)
Jan 26 16:28:29 compute-0 ceilometer_agent_compute[195095]: sudo: unable to send audit message: Operation not permitted
Jan 26 16:28:29 compute-0 podman[195080]: ceilometer_agent_compute
Jan 26 16:28:29 compute-0 sudo[195101]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Jan 26 16:28:29 compute-0 systemd[1]: Started ceilometer_agent_compute container.
Jan 26 16:28:29 compute-0 sudo[195038]: pam_unix(sudo:session): session closed for user root
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: INFO:__main__:Validating config file
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: INFO:__main__:Copying service configuration files
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: INFO:__main__:Copying /var/lib/kolla/config_files/src/polling.yaml to /etc/ceilometer/polling.yaml
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: INFO:__main__:Copying /var/lib/kolla/config_files/src/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: INFO:__main__:Writing out command to execute
Jan 26 16:28:30 compute-0 sudo[195101]: pam_unix(sudo:session): session closed for user root
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: ++ cat /run_command
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: + ARGS=
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: + sudo kolla_copy_cacerts
Jan 26 16:28:30 compute-0 sudo[195123]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: sudo: unable to send audit message: Operation not permitted
Jan 26 16:28:30 compute-0 sudo[195123]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Jan 26 16:28:30 compute-0 podman[195102]: 2026-01-26 16:28:30.034849505 +0000 UTC m=+0.095010042 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=1, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260120, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, tcib_managed=true)
Jan 26 16:28:30 compute-0 sudo[195123]: pam_unix(sudo:session): session closed for user root
Jan 26 16:28:30 compute-0 systemd[1]: 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0-42908931ce18425b.service: Main process exited, code=exited, status=1/FAILURE
Jan 26 16:28:30 compute-0 systemd[1]: 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0-42908931ce18425b.service: Failed with result 'exit-code'.
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: + [[ ! -n '' ]]
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: + . kolla_extend_start
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\'''
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: + umask 0022
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.863 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:45
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.863 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.864 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.864 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.864 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.864 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.864 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.864 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.864 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.864 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.864 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.864 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.865 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.865 2 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.865 2 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.865 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.865 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.865 2 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.865 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.865 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.865 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.866 2 WARNING oslo_config.cfg [-] Deprecated: Option "tenant_name_discovery" from group "DEFAULT" is deprecated. Use option "identity_name_discovery" from group "DEFAULT".
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.866 2 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.866 2 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.866 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.866 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.866 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.866 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.866 2 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.866 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.866 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.866 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.866 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.866 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.867 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.867 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.867 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.867 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.867 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.867 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.867 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.867 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.867 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.867 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.867 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.867 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.867 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.868 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.868 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.868 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.868 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.868 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.868 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.868 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.868 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.868 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.868 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.868 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.868 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.869 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.869 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.869 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.869 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.869 2 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.869 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.869 2 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.869 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.869 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.869 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.869 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.869 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.870 2 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.870 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.870 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.870 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.870 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.870 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.870 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.870 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.870 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.870 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.871 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.871 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.871 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.871 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.871 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.871 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.871 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.871 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.871 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.871 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.872 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.872 2 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.872 2 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.872 2 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.872 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.872 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.872 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.872 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.872 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.873 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.873 2 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.873 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.873 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.873 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.873 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.873 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.873 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.873 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.873 2 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.873 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.874 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.874 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.874 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.874 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.874 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.874 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.874 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.874 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.874 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.874 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.874 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.874 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.874 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.875 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.875 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.875 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.875 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.875 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.875 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.875 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.875 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.875 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.875 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.875 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.876 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.876 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.876 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.876 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.876 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.876 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.876 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.876 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.876 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.876 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.876 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.876 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.876 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.877 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.877 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.877 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.877 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.877 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.895 12 INFO ceilometer.polling.manager [-] Starting heartbeat child service. Listening on /var/lib/ceilometer/ceilometer-compute.socket
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.896 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.896 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.896 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.896 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.896 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.897 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.897 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.897 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.897 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.897 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.897 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.897 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.897 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.897 12 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.898 12 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.898 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.898 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.898 12 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.898 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.898 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.898 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.898 12 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.899 12 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.899 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.899 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.899 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.899 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.899 12 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.899 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.899 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.899 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.899 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.900 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.900 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.900 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.900 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.900 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.900 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.900 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.900 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.901 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.901 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.901 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.901 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.901 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.901 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.901 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.901 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.901 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.901 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.902 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.902 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.902 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.902 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.902 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.902 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.902 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.902 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.902 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.902 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.903 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.903 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.903 12 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.903 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.903 12 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.903 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.903 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.904 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.904 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.904 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.904 12 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.904 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.904 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.904 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.905 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.905 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.905 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.905 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.905 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.905 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.905 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.905 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.906 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.906 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.906 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.906 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.906 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.906 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.906 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.906 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.906 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.906 12 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.907 12 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.907 12 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.907 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.907 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.907 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.907 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.907 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.907 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.907 12 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.907 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.907 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.908 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.908 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.908 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.908 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.908 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.908 12 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.908 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.908 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.908 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.908 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.909 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.909 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.909 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.909 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.909 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.909 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.909 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.909 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.909 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.910 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.910 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.910 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.910 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.910 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.910 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.910 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.910 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.910 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.911 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.911 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.911 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.911 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.911 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.911 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.911 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.911 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.911 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.911 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.912 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.912 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.912 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.912 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.912 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.912 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.912 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.912 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.912 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.912 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.913 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.913 12 DEBUG cotyledon._service [-] Run service AgentHeartBeatManager(0) [12] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.914 12 DEBUG ceilometer.polling.manager [-] Started heartbeat child process. run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:519
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.916 12 DEBUG ceilometer.polling.manager [-] Started heartbeat update thread _read_queue /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:522
Jan 26 16:28:30 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:30.917 12 DEBUG ceilometer.polling.manager [-] Started heartbeat reporting thread _report_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:527
Jan 26 16:28:31 compute-0 python3.9[195284]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.162 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.171 14 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.172 14 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.172 14 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.295 14 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.295 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.296 14 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.296 14 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.296 14 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.296 14 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.296 14 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.296 14 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.296 14 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.296 14 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.296 14 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.296 14 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.297 14 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.297 14 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.297 14 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.297 14 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.297 14 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.297 14 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.297 14 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.297 14 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.297 14 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.297 14 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.297 14 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.298 14 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.298 14 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.298 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.298 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.298 14 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.298 14 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.298 14 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.298 14 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.298 14 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.298 14 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.298 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.298 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.299 14 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.299 14 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.299 14 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.299 14 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.299 14 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.299 14 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.299 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.299 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.299 14 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.299 14 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.299 14 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.299 14 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.300 14 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.300 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.300 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.300 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.300 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.300 14 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.300 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.300 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.300 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.300 14 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.300 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.300 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.301 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.301 14 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.301 14 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.301 14 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.301 14 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.301 14 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.301 14 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.301 14 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.301 14 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.301 14 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.301 14 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.301 14 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.302 14 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.302 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.302 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.302 14 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.302 14 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.302 14 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.302 14 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.302 14 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.302 14 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.302 14 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.302 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.302 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.302 14 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.303 14 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.303 14 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.303 14 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.303 14 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.303 14 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.303 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.303 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.303 14 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.303 14 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.303 14 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.303 14 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.304 14 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.304 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.304 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.304 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.304 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.304 14 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.304 14 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.304 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.304 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.304 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.304 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.304 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.304 14 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.305 14 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.305 14 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.305 14 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.305 14 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.305 14 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.305 14 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.305 14 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.305 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.305 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.305 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url   = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.305 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.305 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.305 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.305 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.306 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.306 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.306 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.306 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.306 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.306 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.306 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password   = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.306 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.306 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.306 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.306 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.306 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.306 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.306 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.306 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.306 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.306 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.306 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.307 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.307 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username   = ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.307 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.307 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.307 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.307 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.307 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.307 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.307 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.307 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.307 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.307 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.307 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.308 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.308 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.308 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.308 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.308 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.308 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.308 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.308 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.308 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.308 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.308 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.308 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.309 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.309 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.309 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.309 14 DEBUG cotyledon._service [-] Run service AgentManager(0) [14] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.311 14 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['power.state', 'cpu', 'memory.usage', 'disk.*', 'network.*']}]} load_config /usr/lib/python3.12/site-packages/ceilometer/agent.py:64
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.326 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.327 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.327 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.327 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f04ce8af020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.327 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.328 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.328 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8adb20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.332 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.332 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.333 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.333 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f04ce8af080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.333 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.334 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f04ce8af0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.334 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.334 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f04ce8af470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.334 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.334 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f04ce8ac260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.334 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.334 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f04cfa81c70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.334 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.334 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f04ce8ac170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.334 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.335 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f04ce8af140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.335 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.335 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f04ce8adb50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.335 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.335 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f04ce8ad9a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.335 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.335 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f04ce8af1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.335 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.335 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f04ce8ad8e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.335 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.335 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f04ce8ada60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.335 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.335 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f04ce8adaf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.336 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.336 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f04ce8ad880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.336 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.336 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f04ce8af3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.336 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.336 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f04ce8af6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.336 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.336 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f04ce8af410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.336 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.336 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f04ce8ac470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.336 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.336 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f04d17142f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.336 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.337 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f04ce8afe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.337 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.337 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f04ce8aede0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.337 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.337 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f04ce8aef00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.337 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.337 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f04ce8ac710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.337 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.337 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f04ce8aef60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.337 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.337 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f04ce8aefc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.337 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.339 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.339 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.339 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.339 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.339 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.339 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.339 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.339 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.339 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.339 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.339 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.340 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.340 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.340 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.340 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.340 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.340 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:28:31.340 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:28:32 compute-0 sudo[195439]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uoytonbcwikytpvldgzvonqvtxsufaph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444911.5995305-558-217144677105751/AnsiballZ_stat.py'
Jan 26 16:28:32 compute-0 sudo[195439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:28:32 compute-0 python3.9[195441]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:28:32 compute-0 sudo[195439]: pam_unix(sudo:session): session closed for user root
Jan 26 16:28:32 compute-0 sudo[195564]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vyejiyimsqffvecrvpgqjvvnkxhylwkn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444911.5995305-558-217144677105751/AnsiballZ_copy.py'
Jan 26 16:28:32 compute-0 sudo[195564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:28:33 compute-0 python3.9[195566]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769444911.5995305-558-217144677105751/.source.yaml _original_basename=.dag7jyhl follow=False checksum=5dc35a545c578c7b5b6f42333de77d736b8feebe backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:28:33 compute-0 sudo[195564]: pam_unix(sudo:session): session closed for user root
Jan 26 16:28:33 compute-0 sudo[195716]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqxxhsitukglypmsoaidsxsdayssejxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444913.2877817-573-61575334432727/AnsiballZ_stat.py'
Jan 26 16:28:33 compute-0 sudo[195716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:28:33 compute-0 python3.9[195718]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/node_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:28:33 compute-0 sudo[195716]: pam_unix(sudo:session): session closed for user root
Jan 26 16:28:34 compute-0 sudo[195839]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-latjsdxsckoxdawoqompjyldssvzhimi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444913.2877817-573-61575334432727/AnsiballZ_copy.py'
Jan 26 16:28:34 compute-0 sudo[195839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:28:34 compute-0 python3.9[195841]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/node_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769444913.2877817-573-61575334432727/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:28:34 compute-0 sudo[195839]: pam_unix(sudo:session): session closed for user root
Jan 26 16:28:35 compute-0 sudo[195991]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qytbqaozoodzmuuqwduwfobjhajwvulw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444914.8981428-594-217037676850016/AnsiballZ_file.py'
Jan 26 16:28:35 compute-0 sudo[195991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:28:35 compute-0 python3.9[195993]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:28:35 compute-0 sudo[195991]: pam_unix(sudo:session): session closed for user root
Jan 26 16:28:36 compute-0 sudo[196143]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugphxccirfrhrxakbqaydbbzrdgsnsfz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444915.8679273-602-179834190738825/AnsiballZ_file.py'
Jan 26 16:28:36 compute-0 sudo[196143]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:28:36 compute-0 python3.9[196145]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:28:36 compute-0 sudo[196143]: pam_unix(sudo:session): session closed for user root
Jan 26 16:28:37 compute-0 sudo[196295]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dcsjwnnzfasngbymgadjhuwqerkapgse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444916.5878322-610-152737704885684/AnsiballZ_stat.py'
Jan 26 16:28:37 compute-0 sudo[196295]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:28:37 compute-0 python3.9[196297]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:28:37 compute-0 sudo[196295]: pam_unix(sudo:session): session closed for user root
Jan 26 16:28:37 compute-0 rsyslogd[1006]: imjournal from <np0005595918:python3.9>: begin to drop messages due to rate-limiting
Jan 26 16:28:37 compute-0 sudo[196373]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvvxxrazmyoikmqxykbuawfthgbgyyux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444916.5878322-610-152737704885684/AnsiballZ_file.py'
Jan 26 16:28:37 compute-0 sudo[196373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:28:37 compute-0 python3.9[196375]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/var/lib/kolla/config_files/ceilometer_agent_compute.json _original_basename=.qb49a1qz recurse=False state=file path=/var/lib/kolla/config_files/ceilometer_agent_compute.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:28:37 compute-0 sudo[196373]: pam_unix(sudo:session): session closed for user root
Jan 26 16:28:38 compute-0 podman[196499]: 2026-01-26 16:28:38.570307307 +0000 UTC m=+0.089079011 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 16:28:38 compute-0 python3.9[196538]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/node_exporter state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:28:41 compute-0 sudo[196966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydndfqrqslapprqwysicsxscavxgxwvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444920.9194584-647-244914024309229/AnsiballZ_container_config_data.py'
Jan 26 16:28:41 compute-0 sudo[196966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:28:41 compute-0 python3.9[196968]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/node_exporter config_pattern=*.json debug=False
Jan 26 16:28:41 compute-0 sudo[196966]: pam_unix(sudo:session): session closed for user root
Jan 26 16:28:42 compute-0 sudo[197118]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihebpdndavwfrhdbcvvuldnqzijrejze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444921.9353604-658-258018312505323/AnsiballZ_container_config_hash.py'
Jan 26 16:28:42 compute-0 sudo[197118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:28:42 compute-0 python3.9[197120]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 26 16:28:42 compute-0 sudo[197118]: pam_unix(sudo:session): session closed for user root
Jan 26 16:28:43 compute-0 sudo[197293]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yudonqhwvdemlyzzzudxxvgnsukosigw ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769444922.9064698-668-21476191731093/AnsiballZ_edpm_container_manage.py'
Jan 26 16:28:43 compute-0 sudo[197293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:28:43 compute-0 podman[197220]: 2026-01-26 16:28:43.235011494 +0000 UTC m=+0.110522148 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 26 16:28:43 compute-0 python3[197298]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/node_exporter config_id=node_exporter config_overrides={} config_patterns=*.json containers=['node_exporter'] log_base_path=/var/log/containers/stdouts debug=False
Jan 26 16:28:43 compute-0 podman[197331]: 2026-01-26 16:28:43.727815131 +0000 UTC m=+0.054472023 container create 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, config_id=node_exporter, container_name=node_exporter)
Jan 26 16:28:43 compute-0 podman[197331]: 2026-01-26 16:28:43.694762005 +0000 UTC m=+0.021418917 image pull 0da6a335fe1356545476b749c68f022c897de3a2139e8f0054f6937349ee2b83 quay.io/prometheus/node-exporter:v1.5.0
Jan 26 16:28:43 compute-0 python3[197298]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name node_exporter --conmon-pidfile /run/node_exporter.pid --env OS_ENDPOINT_TYPE=internal --env EDPM_CONFIG_HASH=b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595 --healthcheck-command /openstack/healthcheck node_exporter --label config_id=node_exporter --label container_name=node_exporter --label managed_by=edpm_ansible --label config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9100:9100 --user root --volume /var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z --volume /var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw --volume /var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z quay.io/prometheus/node-exporter:v1.5.0 --web.config.file=/etc/node_exporter/node_exporter.yaml --web.disable-exporter-metrics --collector.systemd --collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service --no-collector.dmi --no-collector.entropy --no-collector.thermal_zone --no-collector.time --no-collector.timex --no-collector.uname --no-collector.stat --no-collector.hwmon --no-collector.os --no-collector.selinux --no-collector.textfile --no-collector.powersupplyclass --no-collector.pressure --no-collector.rapl
Jan 26 16:28:43 compute-0 sudo[197293]: pam_unix(sudo:session): session closed for user root
Jan 26 16:28:44 compute-0 sudo[197519]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogyhndgrdwymxfcnwwhntzfzjagufwpn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444924.1793718-676-58415296514700/AnsiballZ_stat.py'
Jan 26 16:28:44 compute-0 sudo[197519]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:28:44 compute-0 python3.9[197521]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 16:28:44 compute-0 sudo[197519]: pam_unix(sudo:session): session closed for user root
Jan 26 16:28:45 compute-0 sudo[197673]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfeordtobejnjuuqhvcsgsvwuwoyqwbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444925.000077-685-192547213459844/AnsiballZ_file.py'
Jan 26 16:28:45 compute-0 sudo[197673]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:28:45 compute-0 python3.9[197675]: ansible-file Invoked with path=/etc/systemd/system/edpm_node_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:28:45 compute-0 sudo[197673]: pam_unix(sudo:session): session closed for user root
Jan 26 16:28:45 compute-0 sudo[197749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktruvktlvssdradxbgkalfuiegvvsjzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444925.000077-685-192547213459844/AnsiballZ_stat.py'
Jan 26 16:28:45 compute-0 sudo[197749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:28:45 compute-0 python3.9[197751]: ansible-stat Invoked with path=/etc/systemd/system/edpm_node_exporter_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 16:28:46 compute-0 sudo[197749]: pam_unix(sudo:session): session closed for user root
Jan 26 16:28:46 compute-0 sudo[197900]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dcmkuegsneuxapjbklzneuvtesavkfob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444926.069165-685-165185905032893/AnsiballZ_copy.py'
Jan 26 16:28:46 compute-0 sudo[197900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:28:46 compute-0 python3.9[197902]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769444926.069165-685-165185905032893/source dest=/etc/systemd/system/edpm_node_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:28:46 compute-0 sudo[197900]: pam_unix(sudo:session): session closed for user root
Jan 26 16:28:47 compute-0 sudo[197976]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtchatxziggpqnmeixepshrhescgcrxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444926.069165-685-165185905032893/AnsiballZ_systemd.py'
Jan 26 16:28:47 compute-0 sudo[197976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:28:47 compute-0 python3.9[197978]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 26 16:28:47 compute-0 systemd[1]: Reloading.
Jan 26 16:28:47 compute-0 systemd-sysv-generator[198007]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 16:28:47 compute-0 systemd-rc-local-generator[198001]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 16:28:47 compute-0 sudo[197976]: pam_unix(sudo:session): session closed for user root
Jan 26 16:28:47 compute-0 sudo[198087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flqcfiqarjnuzjozuzmmrkflhvmxflsk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444926.069165-685-165185905032893/AnsiballZ_systemd.py'
Jan 26 16:28:47 compute-0 sudo[198087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:28:48 compute-0 python3.9[198089]: ansible-systemd Invoked with state=restarted name=edpm_node_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 16:28:48 compute-0 systemd[1]: Reloading.
Jan 26 16:28:48 compute-0 systemd-rc-local-generator[198120]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 16:28:48 compute-0 systemd-sysv-generator[198123]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 16:28:48 compute-0 systemd[1]: Starting node_exporter container...
Jan 26 16:28:48 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:28:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a95923a94b80ebce94e52d318e2270bfed5e3f9beed31cdd697b6834102d729b/merged/etc/node_exporter/node_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Jan 26 16:28:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a95923a94b80ebce94e52d318e2270bfed5e3f9beed31cdd697b6834102d729b/merged/etc/node_exporter/tls supports timestamps until 2038 (0x7fffffff)
Jan 26 16:28:48 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633.
Jan 26 16:28:48 compute-0 podman[198130]: 2026-01-26 16:28:48.763739572 +0000 UTC m=+0.125999523 container init 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 26 16:28:48 compute-0 node_exporter[198144]: ts=2026-01-26T16:28:48.778Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)"
Jan 26 16:28:48 compute-0 node_exporter[198144]: ts=2026-01-26T16:28:48.778Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)"
Jan 26 16:28:48 compute-0 node_exporter[198144]: ts=2026-01-26T16:28:48.778Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required."
Jan 26 16:28:48 compute-0 node_exporter[198144]: ts=2026-01-26T16:28:48.779Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Jan 26 16:28:48 compute-0 node_exporter[198144]: ts=2026-01-26T16:28:48.779Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Jan 26 16:28:48 compute-0 node_exporter[198144]: ts=2026-01-26T16:28:48.779Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service
Jan 26 16:28:48 compute-0 node_exporter[198144]: ts=2026-01-26T16:28:48.779Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice)
Jan 26 16:28:48 compute-0 node_exporter[198144]: ts=2026-01-26T16:28:48.779Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Jan 26 16:28:48 compute-0 node_exporter[198144]: ts=2026-01-26T16:28:48.779Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Jan 26 16:28:48 compute-0 node_exporter[198144]: ts=2026-01-26T16:28:48.780Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Jan 26 16:28:48 compute-0 node_exporter[198144]: ts=2026-01-26T16:28:48.780Z caller=node_exporter.go:117 level=info collector=arp
Jan 26 16:28:48 compute-0 node_exporter[198144]: ts=2026-01-26T16:28:48.780Z caller=node_exporter.go:117 level=info collector=bcache
Jan 26 16:28:48 compute-0 node_exporter[198144]: ts=2026-01-26T16:28:48.780Z caller=node_exporter.go:117 level=info collector=bonding
Jan 26 16:28:48 compute-0 node_exporter[198144]: ts=2026-01-26T16:28:48.780Z caller=node_exporter.go:117 level=info collector=btrfs
Jan 26 16:28:48 compute-0 node_exporter[198144]: ts=2026-01-26T16:28:48.780Z caller=node_exporter.go:117 level=info collector=conntrack
Jan 26 16:28:48 compute-0 node_exporter[198144]: ts=2026-01-26T16:28:48.780Z caller=node_exporter.go:117 level=info collector=cpu
Jan 26 16:28:48 compute-0 node_exporter[198144]: ts=2026-01-26T16:28:48.780Z caller=node_exporter.go:117 level=info collector=cpufreq
Jan 26 16:28:48 compute-0 node_exporter[198144]: ts=2026-01-26T16:28:48.780Z caller=node_exporter.go:117 level=info collector=diskstats
Jan 26 16:28:48 compute-0 node_exporter[198144]: ts=2026-01-26T16:28:48.780Z caller=node_exporter.go:117 level=info collector=edac
Jan 26 16:28:48 compute-0 node_exporter[198144]: ts=2026-01-26T16:28:48.780Z caller=node_exporter.go:117 level=info collector=fibrechannel
Jan 26 16:28:48 compute-0 node_exporter[198144]: ts=2026-01-26T16:28:48.780Z caller=node_exporter.go:117 level=info collector=filefd
Jan 26 16:28:48 compute-0 node_exporter[198144]: ts=2026-01-26T16:28:48.780Z caller=node_exporter.go:117 level=info collector=filesystem
Jan 26 16:28:48 compute-0 node_exporter[198144]: ts=2026-01-26T16:28:48.780Z caller=node_exporter.go:117 level=info collector=infiniband
Jan 26 16:28:48 compute-0 node_exporter[198144]: ts=2026-01-26T16:28:48.780Z caller=node_exporter.go:117 level=info collector=ipvs
Jan 26 16:28:48 compute-0 node_exporter[198144]: ts=2026-01-26T16:28:48.780Z caller=node_exporter.go:117 level=info collector=loadavg
Jan 26 16:28:48 compute-0 node_exporter[198144]: ts=2026-01-26T16:28:48.780Z caller=node_exporter.go:117 level=info collector=mdadm
Jan 26 16:28:48 compute-0 node_exporter[198144]: ts=2026-01-26T16:28:48.780Z caller=node_exporter.go:117 level=info collector=meminfo
Jan 26 16:28:48 compute-0 node_exporter[198144]: ts=2026-01-26T16:28:48.780Z caller=node_exporter.go:117 level=info collector=netclass
Jan 26 16:28:48 compute-0 node_exporter[198144]: ts=2026-01-26T16:28:48.780Z caller=node_exporter.go:117 level=info collector=netdev
Jan 26 16:28:48 compute-0 node_exporter[198144]: ts=2026-01-26T16:28:48.780Z caller=node_exporter.go:117 level=info collector=netstat
Jan 26 16:28:48 compute-0 node_exporter[198144]: ts=2026-01-26T16:28:48.780Z caller=node_exporter.go:117 level=info collector=nfs
Jan 26 16:28:48 compute-0 node_exporter[198144]: ts=2026-01-26T16:28:48.780Z caller=node_exporter.go:117 level=info collector=nfsd
Jan 26 16:28:48 compute-0 node_exporter[198144]: ts=2026-01-26T16:28:48.780Z caller=node_exporter.go:117 level=info collector=nvme
Jan 26 16:28:48 compute-0 node_exporter[198144]: ts=2026-01-26T16:28:48.780Z caller=node_exporter.go:117 level=info collector=schedstat
Jan 26 16:28:48 compute-0 node_exporter[198144]: ts=2026-01-26T16:28:48.780Z caller=node_exporter.go:117 level=info collector=sockstat
Jan 26 16:28:48 compute-0 node_exporter[198144]: ts=2026-01-26T16:28:48.780Z caller=node_exporter.go:117 level=info collector=softnet
Jan 26 16:28:48 compute-0 node_exporter[198144]: ts=2026-01-26T16:28:48.780Z caller=node_exporter.go:117 level=info collector=systemd
Jan 26 16:28:48 compute-0 node_exporter[198144]: ts=2026-01-26T16:28:48.780Z caller=node_exporter.go:117 level=info collector=tapestats
Jan 26 16:28:48 compute-0 node_exporter[198144]: ts=2026-01-26T16:28:48.780Z caller=node_exporter.go:117 level=info collector=udp_queues
Jan 26 16:28:48 compute-0 node_exporter[198144]: ts=2026-01-26T16:28:48.780Z caller=node_exporter.go:117 level=info collector=vmstat
Jan 26 16:28:48 compute-0 node_exporter[198144]: ts=2026-01-26T16:28:48.780Z caller=node_exporter.go:117 level=info collector=xfs
Jan 26 16:28:48 compute-0 node_exporter[198144]: ts=2026-01-26T16:28:48.780Z caller=node_exporter.go:117 level=info collector=zfs
Jan 26 16:28:48 compute-0 node_exporter[198144]: ts=2026-01-26T16:28:48.781Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100
Jan 26 16:28:48 compute-0 node_exporter[198144]: ts=2026-01-26T16:28:48.781Z caller=tls_config.go:268 level=info msg="TLS is enabled." http2=true address=[::]:9100
Jan 26 16:28:48 compute-0 podman[198130]: 2026-01-26 16:28:48.790744537 +0000 UTC m=+0.153004458 container start 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 16:28:48 compute-0 podman[198130]: node_exporter
Jan 26 16:28:48 compute-0 systemd[1]: Started node_exporter container.
Jan 26 16:28:48 compute-0 sudo[198087]: pam_unix(sudo:session): session closed for user root
Jan 26 16:28:48 compute-0 podman[198154]: 2026-01-26 16:28:48.862821242 +0000 UTC m=+0.059264392 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 26 16:28:49 compute-0 python3.9[198324]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 26 16:28:50 compute-0 sudo[198474]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acmkrvjvdvdjxxtlzbutbvyaagmfnqto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444930.1391413-730-68994424623614/AnsiballZ_stat.py'
Jan 26 16:28:50 compute-0 sudo[198474]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:28:50 compute-0 python3.9[198476]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:28:50 compute-0 sudo[198474]: pam_unix(sudo:session): session closed for user root
Jan 26 16:28:51 compute-0 sudo[198599]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snzijvegbuxpahjivlzgytfpjxodjtbn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444930.1391413-730-68994424623614/AnsiballZ_copy.py'
Jan 26 16:28:51 compute-0 sudo[198599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:28:51 compute-0 python3.9[198601]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769444930.1391413-730-68994424623614/.source.yaml _original_basename=.1kbq2jzh follow=False checksum=a11d6ca19bd71d280405ac5e19057cafa3c715aa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:28:51 compute-0 sudo[198599]: pam_unix(sudo:session): session closed for user root
Jan 26 16:28:51 compute-0 sudo[198751]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unxpekzamahzmbnywzhzxfmjcvklijqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444931.5673506-745-156393240800171/AnsiballZ_stat.py'
Jan 26 16:28:51 compute-0 sudo[198751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:28:52 compute-0 python3.9[198753]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/podman_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:28:52 compute-0 sudo[198751]: pam_unix(sudo:session): session closed for user root
Jan 26 16:28:52 compute-0 sudo[198874]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dstogccsxsnfomlcvphwwfovzsusgfky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444931.5673506-745-156393240800171/AnsiballZ_copy.py'
Jan 26 16:28:52 compute-0 sudo[198874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:28:52 compute-0 python3.9[198876]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/podman_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769444931.5673506-745-156393240800171/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:28:52 compute-0 sudo[198874]: pam_unix(sudo:session): session closed for user root
Jan 26 16:28:53 compute-0 sudo[199026]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgfrcpjcdfivmiqrzxkbfearcfjmvmpi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444933.2448077-766-129251710807859/AnsiballZ_file.py'
Jan 26 16:28:53 compute-0 sudo[199026]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:28:53 compute-0 python3.9[199028]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:28:53 compute-0 sudo[199026]: pam_unix(sudo:session): session closed for user root
Jan 26 16:28:54 compute-0 sudo[199178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erjkabbuphwhxvqpfudedfswegjyvwyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444933.9965453-774-256386063829680/AnsiballZ_file.py'
Jan 26 16:28:54 compute-0 sudo[199178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:28:54 compute-0 python3.9[199180]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:28:54 compute-0 sudo[199178]: pam_unix(sudo:session): session closed for user root
Jan 26 16:28:55 compute-0 sudo[199330]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hptilhulipfpgsgrqwqvlwwhebeusbxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444934.7699468-782-259388448066878/AnsiballZ_stat.py'
Jan 26 16:28:55 compute-0 sudo[199330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:28:55 compute-0 python3.9[199332]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:28:55 compute-0 sudo[199330]: pam_unix(sudo:session): session closed for user root
Jan 26 16:28:55 compute-0 sudo[199408]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmsqyqqiwprfvvptletvglzlflwrnmeg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444934.7699468-782-259388448066878/AnsiballZ_file.py'
Jan 26 16:28:55 compute-0 sudo[199408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:28:55 compute-0 python3.9[199410]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/var/lib/kolla/config_files/ceilometer_agent_compute.json _original_basename=.aj0o7fan recurse=False state=file path=/var/lib/kolla/config_files/ceilometer_agent_compute.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:28:55 compute-0 sudo[199408]: pam_unix(sudo:session): session closed for user root
Jan 26 16:28:56 compute-0 python3.9[199560]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/podman_exporter state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:28:58 compute-0 sudo[199981]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stkyipiqiutbmzdyzohmxqyawoujucht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444938.0685937-819-83539552178754/AnsiballZ_container_config_data.py'
Jan 26 16:28:58 compute-0 sudo[199981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:28:58 compute-0 python3.9[199983]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/podman_exporter config_pattern=*.json debug=False
Jan 26 16:28:58 compute-0 sudo[199981]: pam_unix(sudo:session): session closed for user root
Jan 26 16:28:59 compute-0 sudo[200133]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndpqauhmewvmatkpgaolblfrmxowkaii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444938.920293-830-242253880228146/AnsiballZ_container_config_hash.py'
Jan 26 16:28:59 compute-0 sudo[200133]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:28:59 compute-0 python3.9[200135]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 26 16:28:59 compute-0 sudo[200133]: pam_unix(sudo:session): session closed for user root
Jan 26 16:29:00 compute-0 podman[200235]: 2026-01-26 16:29:00.160781186 +0000 UTC m=+0.054330750 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=2, health_log=, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20260120, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image)
Jan 26 16:29:00 compute-0 systemd[1]: 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0-42908931ce18425b.service: Main process exited, code=exited, status=1/FAILURE
Jan 26 16:29:00 compute-0 systemd[1]: 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0-42908931ce18425b.service: Failed with result 'exit-code'.
Jan 26 16:29:00 compute-0 sudo[200304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brpshiqagsopncyimtzyoedwbsfjuwzb ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769444939.9202137-840-269210306537825/AnsiballZ_edpm_container_manage.py'
Jan 26 16:29:00 compute-0 sudo[200304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:29:00 compute-0 python3[200306]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/podman_exporter config_id=podman_exporter config_overrides={} config_patterns=*.json containers=['podman_exporter'] log_base_path=/var/log/containers/stdouts debug=False
Jan 26 16:29:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:29:01.704 106955 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:29:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:29:01.704 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:29:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:29:01.704 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:29:01 compute-0 podman[200320]: 2026-01-26 16:29:01.913004868 +0000 UTC m=+1.341522180 image pull e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815 quay.io/navidys/prometheus-podman-exporter:v1.10.1
Jan 26 16:29:02 compute-0 podman[200419]: 2026-01-26 16:29:02.048774121 +0000 UTC m=+0.045869371 container create 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=podman_exporter, container_name=podman_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 26 16:29:02 compute-0 podman[200419]: 2026-01-26 16:29:02.026748181 +0000 UTC m=+0.023843451 image pull e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815 quay.io/navidys/prometheus-podman-exporter:v1.10.1
Jan 26 16:29:02 compute-0 python3[200306]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name podman_exporter --conmon-pidfile /run/podman_exporter.pid --env CONTAINER_HOST=unix:///run/podman/podman.sock --env OS_ENDPOINT_TYPE=internal --env EDPM_CONFIG_HASH=b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595 --healthcheck-command /openstack/healthcheck podman_exporter --label config_id=podman_exporter --label container_name=podman_exporter --label managed_by=edpm_ansible --label config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9882:9882 --user root --volume /var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z --volume /run/podman/podman.sock:/run/podman/podman.sock:rw,z --volume /var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z quay.io/navidys/prometheus-podman-exporter:v1.10.1 --web.config.file=/etc/podman_exporter/podman_exporter.yaml
Jan 26 16:29:02 compute-0 sudo[200304]: pam_unix(sudo:session): session closed for user root
Jan 26 16:29:02 compute-0 sudo[200607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsdduhyecojnxoknttmmfnealcfqnisg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444942.3661497-848-57034522837197/AnsiballZ_stat.py'
Jan 26 16:29:02 compute-0 sudo[200607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:29:02 compute-0 python3.9[200609]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 16:29:02 compute-0 sudo[200607]: pam_unix(sudo:session): session closed for user root
Jan 26 16:29:03 compute-0 sudo[200761]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sedocjqcubdqqftyccszirjtgqwkwfzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444943.1854084-857-26181216500739/AnsiballZ_file.py'
Jan 26 16:29:03 compute-0 sudo[200761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:29:03 compute-0 python3.9[200763]: ansible-file Invoked with path=/etc/systemd/system/edpm_podman_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:29:03 compute-0 sudo[200761]: pam_unix(sudo:session): session closed for user root
Jan 26 16:29:03 compute-0 sudo[200837]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhdgenydkgabhwjevsilrvpornxzcqrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444943.1854084-857-26181216500739/AnsiballZ_stat.py'
Jan 26 16:29:03 compute-0 sudo[200837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:29:04 compute-0 python3.9[200839]: ansible-stat Invoked with path=/etc/systemd/system/edpm_podman_exporter_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 16:29:04 compute-0 sudo[200837]: pam_unix(sudo:session): session closed for user root
Jan 26 16:29:04 compute-0 sudo[200988]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vobytqvyuuwdxaikbsxmzrbpahybvbzj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444944.164104-857-46863839103660/AnsiballZ_copy.py'
Jan 26 16:29:04 compute-0 sudo[200988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:29:04 compute-0 python3.9[200990]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769444944.164104-857-46863839103660/source dest=/etc/systemd/system/edpm_podman_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:29:04 compute-0 sudo[200988]: pam_unix(sudo:session): session closed for user root
Jan 26 16:29:05 compute-0 sudo[201064]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxalzwphxkvncormiyrqyfwanxwpyfiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444944.164104-857-46863839103660/AnsiballZ_systemd.py'
Jan 26 16:29:05 compute-0 sudo[201064]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:29:05 compute-0 python3.9[201066]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 26 16:29:05 compute-0 systemd[1]: Reloading.
Jan 26 16:29:05 compute-0 systemd-rc-local-generator[201091]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 16:29:05 compute-0 systemd-sysv-generator[201094]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 16:29:05 compute-0 sudo[201064]: pam_unix(sudo:session): session closed for user root
Jan 26 16:29:06 compute-0 sudo[201175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmwnyofplfoienjhkytoqxtgenjthqbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444944.164104-857-46863839103660/AnsiballZ_systemd.py'
Jan 26 16:29:06 compute-0 sudo[201175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:29:06 compute-0 python3.9[201177]: ansible-systemd Invoked with state=restarted name=edpm_podman_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 16:29:06 compute-0 systemd[1]: Reloading.
Jan 26 16:29:06 compute-0 systemd-rc-local-generator[201205]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 16:29:06 compute-0 systemd-sysv-generator[201210]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 16:29:06 compute-0 systemd[1]: Starting podman_exporter container...
Jan 26 16:29:06 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:29:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f27f453cfcc785ff799d73dfe4ee4afc3e9c4d01f624b38c71e0727080a3d370/merged/etc/podman_exporter/podman_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Jan 26 16:29:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f27f453cfcc785ff799d73dfe4ee4afc3e9c4d01f624b38c71e0727080a3d370/merged/etc/podman_exporter/tls supports timestamps until 2038 (0x7fffffff)
Jan 26 16:29:06 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64.
Jan 26 16:29:06 compute-0 podman[201217]: 2026-01-26 16:29:06.948560499 +0000 UTC m=+0.151384305 container init 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 26 16:29:06 compute-0 podman_exporter[201233]: ts=2026-01-26T16:29:06.974Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Jan 26 16:29:06 compute-0 podman_exporter[201233]: ts=2026-01-26T16:29:06.974Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Jan 26 16:29:06 compute-0 podman_exporter[201233]: ts=2026-01-26T16:29:06.974Z caller=handler.go:94 level=info msg="enabled collectors"
Jan 26 16:29:06 compute-0 podman_exporter[201233]: ts=2026-01-26T16:29:06.974Z caller=handler.go:105 level=info collector=container
Jan 26 16:29:06 compute-0 podman[201217]: 2026-01-26 16:29:06.989078126 +0000 UTC m=+0.191901932 container start 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 26 16:29:06 compute-0 podman[201217]: podman_exporter
Jan 26 16:29:07 compute-0 systemd[1]: Starting Podman API Service...
Jan 26 16:29:07 compute-0 systemd[1]: Started Podman API Service.
Jan 26 16:29:07 compute-0 systemd[1]: Started podman_exporter container.
Jan 26 16:29:07 compute-0 podman[201244]: time="2026-01-26T16:29:07Z" level=info msg="/usr/bin/podman filtering at log level info"
Jan 26 16:29:07 compute-0 podman[201244]: time="2026-01-26T16:29:07Z" level=info msg="Setting parallel job count to 25"
Jan 26 16:29:07 compute-0 podman[201244]: time="2026-01-26T16:29:07Z" level=info msg="Using sqlite as database backend"
Jan 26 16:29:07 compute-0 podman[201244]: time="2026-01-26T16:29:07Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Jan 26 16:29:07 compute-0 podman[201244]: time="2026-01-26T16:29:07Z" level=info msg="Using systemd socket activation to determine API endpoint"
Jan 26 16:29:07 compute-0 podman[201244]: time="2026-01-26T16:29:07Z" level=info msg="API service listening on \"/run/podman/podman.sock\". URI: \"unix:///run/podman/podman.sock\""
Jan 26 16:29:07 compute-0 podman[201244]: @ - - [26/Jan/2026:16:29:07 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Jan 26 16:29:07 compute-0 podman[201244]: time="2026-01-26T16:29:07Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 16:29:07 compute-0 sudo[201175]: pam_unix(sudo:session): session closed for user root
Jan 26 16:29:07 compute-0 podman[201244]: @ - - [26/Jan/2026:16:29:07 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 18095 "" "Go-http-client/1.1"
Jan 26 16:29:07 compute-0 podman_exporter[201233]: ts=2026-01-26T16:29:07.084Z caller=exporter.go:96 level=info msg="Listening on" address=:9882
Jan 26 16:29:07 compute-0 podman_exporter[201233]: ts=2026-01-26T16:29:07.084Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882
Jan 26 16:29:07 compute-0 podman_exporter[201233]: ts=2026-01-26T16:29:07.084Z caller=tls_config.go:349 level=info msg="TLS is enabled." http2=true address=[::]:9882
Jan 26 16:29:07 compute-0 podman[201242]: 2026-01-26 16:29:07.085010031 +0000 UTC m=+0.082118885 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=starting, health_failing_streak=1, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 26 16:29:07 compute-0 systemd[1]: 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64-2061995da8628064.service: Main process exited, code=exited, status=1/FAILURE
Jan 26 16:29:07 compute-0 systemd[1]: 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64-2061995da8628064.service: Failed with result 'exit-code'.
Jan 26 16:29:07 compute-0 auditd[704]: Audit daemon rotating log files
Jan 26 16:29:07 compute-0 python3.9[201429]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 26 16:29:08 compute-0 sudo[201579]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqnpeyroteoownyozqnbqqpyyojzncfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444948.2203956-902-197490260262213/AnsiballZ_stat.py'
Jan 26 16:29:08 compute-0 sudo[201579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:29:08 compute-0 python3.9[201581]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:29:08 compute-0 sudo[201579]: pam_unix(sudo:session): session closed for user root
Jan 26 16:29:08 compute-0 podman[201582]: 2026-01-26 16:29:08.796452168 +0000 UTC m=+0.047026863 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 26 16:29:09 compute-0 sudo[201724]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqurgiwdannqanyqepthwmyvhcppqfbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444948.2203956-902-197490260262213/AnsiballZ_copy.py'
Jan 26 16:29:09 compute-0 sudo[201724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:29:09 compute-0 python3.9[201726]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769444948.2203956-902-197490260262213/.source.yaml _original_basename=.8pnaqg24 follow=False checksum=5e125d2d393d6cd0925701d568f893d33889e7bb backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:29:09 compute-0 sudo[201724]: pam_unix(sudo:session): session closed for user root
Jan 26 16:29:09 compute-0 sudo[201876]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trdkxavbuuupldwxtbnkhuukbxqqxnkq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444949.6339376-917-166044309145675/AnsiballZ_stat.py'
Jan 26 16:29:09 compute-0 sudo[201876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:29:10 compute-0 python3.9[201878]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/openstack_network_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:29:10 compute-0 sudo[201876]: pam_unix(sudo:session): session closed for user root
Jan 26 16:29:10 compute-0 sudo[201999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fyarwasinukgafvtnsdffnwaxsabftym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444949.6339376-917-166044309145675/AnsiballZ_copy.py'
Jan 26 16:29:10 compute-0 sudo[201999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:29:10 compute-0 python3.9[202001]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/openstack_network_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769444949.6339376-917-166044309145675/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:29:10 compute-0 sudo[201999]: pam_unix(sudo:session): session closed for user root
Jan 26 16:29:11 compute-0 sudo[202151]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isgkxwdjmvkjhacemmeqmafunfwftfhj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444951.2909331-938-115342747463021/AnsiballZ_file.py'
Jan 26 16:29:11 compute-0 sudo[202151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:29:11 compute-0 python3.9[202153]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:29:11 compute-0 sudo[202151]: pam_unix(sudo:session): session closed for user root
Jan 26 16:29:12 compute-0 sudo[202303]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wyafxvrcbazvfvhsgdwxawjgbrralqdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444952.2056522-946-134790384292993/AnsiballZ_file.py'
Jan 26 16:29:12 compute-0 sudo[202303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:29:12 compute-0 python3.9[202305]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:29:12 compute-0 sudo[202303]: pam_unix(sudo:session): session closed for user root
Jan 26 16:29:13 compute-0 sudo[202455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxvcdsjiatxhqkgcsyvrjhzqalzwceml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444952.9292572-954-91021445615072/AnsiballZ_stat.py'
Jan 26 16:29:13 compute-0 sudo[202455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:29:13 compute-0 python3.9[202457]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:29:13 compute-0 sudo[202455]: pam_unix(sudo:session): session closed for user root
Jan 26 16:29:13 compute-0 sudo[202546]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nyrwlivsbjmorlfgkjadqbxgttociqha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444952.9292572-954-91021445615072/AnsiballZ_file.py'
Jan 26 16:29:13 compute-0 sudo[202546]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:29:13 compute-0 podman[202507]: 2026-01-26 16:29:13.773886049 +0000 UTC m=+0.082965558 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 26 16:29:13 compute-0 python3.9[202554]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/var/lib/kolla/config_files/ceilometer_agent_compute.json _original_basename=.vh79udp6 recurse=False state=file path=/var/lib/kolla/config_files/ceilometer_agent_compute.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:29:13 compute-0 sudo[202546]: pam_unix(sudo:session): session closed for user root
Jan 26 16:29:14 compute-0 python3.9[202711]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/openstack_network_exporter state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:29:16 compute-0 sudo[203132]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztpjbhgjmvkegzgdgjrofatmqjkeevnb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444956.322885-991-161416581121030/AnsiballZ_container_config_data.py'
Jan 26 16:29:16 compute-0 sudo[203132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:29:16 compute-0 python3.9[203134]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/openstack_network_exporter config_pattern=*.json debug=False
Jan 26 16:29:16 compute-0 sudo[203132]: pam_unix(sudo:session): session closed for user root
Jan 26 16:29:17 compute-0 sudo[203284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkjaoykesmenwerxuevvpvazarzhfdmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444957.2112648-1002-118861809526881/AnsiballZ_container_config_hash.py'
Jan 26 16:29:17 compute-0 sudo[203284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:29:17 compute-0 python3.9[203286]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 26 16:29:17 compute-0 sudo[203284]: pam_unix(sudo:session): session closed for user root
Jan 26 16:29:18 compute-0 sudo[203436]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffnupwscqwwiswesecolorlarjblvzka ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769444958.153418-1012-84435346707184/AnsiballZ_edpm_container_manage.py'
Jan 26 16:29:18 compute-0 sudo[203436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:29:18 compute-0 python3[203438]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/openstack_network_exporter config_id=openstack_network_exporter config_overrides={} config_patterns=*.json containers=['openstack_network_exporter'] log_base_path=/var/log/containers/stdouts debug=False
Jan 26 16:29:19 compute-0 podman[203465]: 2026-01-26 16:29:19.174989132 +0000 UTC m=+0.066433415 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 26 16:29:21 compute-0 podman[203452]: 2026-01-26 16:29:21.916285992 +0000 UTC m=+3.122434341 image pull 186c5e97c6f6912533851a0044ea6da23938910e7bddfb4a6c0be9b48ab2a1d1 quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Jan 26 16:29:22 compute-0 podman[203573]: 2026-01-26 16:29:22.040607789 +0000 UTC m=+0.040827367 container create 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, version=9.6, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9)
Jan 26 16:29:22 compute-0 podman[203573]: 2026-01-26 16:29:22.020727996 +0000 UTC m=+0.020947594 image pull 186c5e97c6f6912533851a0044ea6da23938910e7bddfb4a6c0be9b48ab2a1d1 quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Jan 26 16:29:22 compute-0 python3[203438]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name openstack_network_exporter --conmon-pidfile /run/openstack_network_exporter.pid --env OPENSTACK_NETWORK_EXPORTER_YAML=/etc/openstack_network_exporter/openstack_network_exporter.yaml --env OS_ENDPOINT_TYPE=internal --env EDPM_CONFIG_HASH=b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595 --healthcheck-command /openstack/healthcheck openstack-netwo --label config_id=openstack_network_exporter --label container_name=openstack_network_exporter --label managed_by=edpm_ansible --label config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9105:9105 --volume /var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z --volume 
/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z --volume /var/run/openvswitch:/run/openvswitch:rw,z --volume /var/lib/openvswitch/ovn:/run/ovn:rw,z --volume /proc:/host/proc:ro --volume /var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Jan 26 16:29:22 compute-0 sudo[203436]: pam_unix(sudo:session): session closed for user root
Jan 26 16:29:22 compute-0 sudo[203761]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ercfwumtpwnyhjoxdanudbpxvngugnop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444962.3606877-1020-137644213130134/AnsiballZ_stat.py'
Jan 26 16:29:22 compute-0 sudo[203761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:29:23 compute-0 python3.9[203763]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 16:29:23 compute-0 sudo[203761]: pam_unix(sudo:session): session closed for user root
Jan 26 16:29:23 compute-0 sudo[203915]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bynoopwzqbtbjyrkbrpcguqeirunvrpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444963.3087895-1029-31946212961602/AnsiballZ_file.py'
Jan 26 16:29:23 compute-0 sudo[203915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:29:23 compute-0 python3.9[203917]: ansible-file Invoked with path=/etc/systemd/system/edpm_openstack_network_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:29:23 compute-0 sudo[203915]: pam_unix(sudo:session): session closed for user root
Jan 26 16:29:24 compute-0 sudo[203991]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwgpwsevzbpujlwuaemcpchkmwldywla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444963.3087895-1029-31946212961602/AnsiballZ_stat.py'
Jan 26 16:29:24 compute-0 sudo[203991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:29:24 compute-0 python3.9[203993]: ansible-stat Invoked with path=/etc/systemd/system/edpm_openstack_network_exporter_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 16:29:24 compute-0 sudo[203991]: pam_unix(sudo:session): session closed for user root
Jan 26 16:29:24 compute-0 sudo[204142]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfwytnumqzqmymmgtmrlnjipbglrjfvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444964.3409753-1029-134524801214492/AnsiballZ_copy.py'
Jan 26 16:29:24 compute-0 sudo[204142]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:29:24 compute-0 python3.9[204144]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769444964.3409753-1029-134524801214492/source dest=/etc/systemd/system/edpm_openstack_network_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:29:25 compute-0 sudo[204142]: pam_unix(sudo:session): session closed for user root
Jan 26 16:29:25 compute-0 sudo[204218]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxwoxnafznolkimuoyqoccynusjnsgtp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444964.3409753-1029-134524801214492/AnsiballZ_systemd.py'
Jan 26 16:29:25 compute-0 sudo[204218]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:29:25 compute-0 python3.9[204220]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 26 16:29:25 compute-0 systemd[1]: Reloading.
Jan 26 16:29:25 compute-0 systemd-rc-local-generator[204244]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 16:29:25 compute-0 systemd-sysv-generator[204248]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 16:29:25 compute-0 sudo[204218]: pam_unix(sudo:session): session closed for user root
Jan 26 16:29:26 compute-0 sudo[204329]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggoxbkofpiwjtewdujssdamossddsnvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444964.3409753-1029-134524801214492/AnsiballZ_systemd.py'
Jan 26 16:29:26 compute-0 sudo[204329]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:29:26 compute-0 python3.9[204331]: ansible-systemd Invoked with state=restarted name=edpm_openstack_network_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 16:29:26 compute-0 systemd[1]: Reloading.
Jan 26 16:29:26 compute-0 nova_compute[185389]: 2026-01-26 16:29:26.702 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:29:26 compute-0 nova_compute[185389]: 2026-01-26 16:29:26.704 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:29:26 compute-0 nova_compute[185389]: 2026-01-26 16:29:26.723 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:29:26 compute-0 nova_compute[185389]: 2026-01-26 16:29:26.723 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:29:26 compute-0 nova_compute[185389]: 2026-01-26 16:29:26.724 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 16:29:26 compute-0 nova_compute[185389]: 2026-01-26 16:29:26.737 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 16:29:26 compute-0 nova_compute[185389]: 2026-01-26 16:29:26.737 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:29:26 compute-0 nova_compute[185389]: 2026-01-26 16:29:26.738 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:29:26 compute-0 nova_compute[185389]: 2026-01-26 16:29:26.739 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:29:26 compute-0 nova_compute[185389]: 2026-01-26 16:29:26.739 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:29:26 compute-0 nova_compute[185389]: 2026-01-26 16:29:26.739 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:29:26 compute-0 nova_compute[185389]: 2026-01-26 16:29:26.739 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:29:26 compute-0 nova_compute[185389]: 2026-01-26 16:29:26.740 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:29:26 compute-0 nova_compute[185389]: 2026-01-26 16:29:26.741 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:29:26 compute-0 nova_compute[185389]: 2026-01-26 16:29:26.767 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:29:26 compute-0 nova_compute[185389]: 2026-01-26 16:29:26.769 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:29:26 compute-0 nova_compute[185389]: 2026-01-26 16:29:26.770 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:29:26 compute-0 nova_compute[185389]: 2026-01-26 16:29:26.770 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:29:26 compute-0 systemd-sysv-generator[204366]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 16:29:26 compute-0 systemd-rc-local-generator[204362]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 16:29:26 compute-0 nova_compute[185389]: 2026-01-26 16:29:26.949 185393 WARNING nova.virt.libvirt.driver [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:29:26 compute-0 nova_compute[185389]: 2026-01-26 16:29:26.950 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5822MB free_disk=72.4545669555664GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:29:26 compute-0 nova_compute[185389]: 2026-01-26 16:29:26.950 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:29:26 compute-0 nova_compute[185389]: 2026-01-26 16:29:26.951 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:29:27 compute-0 systemd[1]: Starting openstack_network_exporter container...
Jan 26 16:29:27 compute-0 nova_compute[185389]: 2026-01-26 16:29:27.107 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:29:27 compute-0 nova_compute[185389]: 2026-01-26 16:29:27.108 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:29:27 compute-0 nova_compute[185389]: 2026-01-26 16:29:27.139 185393 DEBUG nova.compute.provider_tree [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:29:27 compute-0 nova_compute[185389]: 2026-01-26 16:29:27.153 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:29:27 compute-0 nova_compute[185389]: 2026-01-26 16:29:27.155 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:29:27 compute-0 nova_compute[185389]: 2026-01-26 16:29:27.156 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.205s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:29:27 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:29:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4ed0424471801cf14ac5fd1f79019e68c419b305e9a7019634295325cc94e26/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Jan 26 16:29:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4ed0424471801cf14ac5fd1f79019e68c419b305e9a7019634295325cc94e26/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Jan 26 16:29:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4ed0424471801cf14ac5fd1f79019e68c419b305e9a7019634295325cc94e26/merged/etc/openstack_network_exporter/tls supports timestamps until 2038 (0x7fffffff)
Jan 26 16:29:27 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069.
Jan 26 16:29:27 compute-0 podman[204371]: 2026-01-26 16:29:27.222880522 +0000 UTC m=+0.177118892 container init 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=ubi9-minimal, io.buildah.version=1.33.7, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, release=1755695350)
Jan 26 16:29:27 compute-0 openstack_network_exporter[204387]: INFO    16:29:27 main.go:48: registering *bridge.Collector
Jan 26 16:29:27 compute-0 openstack_network_exporter[204387]: INFO    16:29:27 main.go:48: registering *coverage.Collector
Jan 26 16:29:27 compute-0 openstack_network_exporter[204387]: INFO    16:29:27 main.go:48: registering *datapath.Collector
Jan 26 16:29:27 compute-0 openstack_network_exporter[204387]: INFO    16:29:27 main.go:48: registering *iface.Collector
Jan 26 16:29:27 compute-0 openstack_network_exporter[204387]: INFO    16:29:27 main.go:48: registering *memory.Collector
Jan 26 16:29:27 compute-0 openstack_network_exporter[204387]: INFO    16:29:27 main.go:55: *ovnnorthd.Collector not registered, metric set not enabled
Jan 26 16:29:27 compute-0 openstack_network_exporter[204387]: INFO    16:29:27 main.go:48: registering *ovn.Collector
Jan 26 16:29:27 compute-0 openstack_network_exporter[204387]: INFO    16:29:27 main.go:55: *ovsdbserver.Collector not registered, metric set not enabled
Jan 26 16:29:27 compute-0 openstack_network_exporter[204387]: INFO    16:29:27 main.go:48: registering *pmd_perf.Collector
Jan 26 16:29:27 compute-0 openstack_network_exporter[204387]: INFO    16:29:27 main.go:48: registering *pmd_rxq.Collector
Jan 26 16:29:27 compute-0 openstack_network_exporter[204387]: INFO    16:29:27 main.go:48: registering *vswitch.Collector
Jan 26 16:29:27 compute-0 openstack_network_exporter[204387]: NOTICE  16:29:27 main.go:76: listening on https://:9105/metrics
Jan 26 16:29:27 compute-0 podman[204371]: 2026-01-26 16:29:27.268477675 +0000 UTC m=+0.222716045 container start 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, architecture=x86_64, distribution-scope=public, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, version=9.6, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, container_name=openstack_network_exporter, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Jan 26 16:29:27 compute-0 podman[204371]: openstack_network_exporter
Jan 26 16:29:27 compute-0 systemd[1]: Started openstack_network_exporter container.
Jan 26 16:29:27 compute-0 sudo[204329]: pam_unix(sudo:session): session closed for user root
Jan 26 16:29:27 compute-0 podman[204397]: 2026-01-26 16:29:27.398282476 +0000 UTC m=+0.117143739 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, config_id=openstack_network_exporter, name=ubi9-minimal, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package 
manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, managed_by=edpm_ansible, architecture=x86_64, release=1755695350, container_name=openstack_network_exporter, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, version=9.6, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7)
Jan 26 16:29:28 compute-0 python3.9[204569]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 26 16:29:28 compute-0 sudo[204719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bttimvrumtblxlwlwjpivvjnnxbfnooe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444968.6457965-1074-57483981258410/AnsiballZ_stat.py'
Jan 26 16:29:28 compute-0 sudo[204719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:29:29 compute-0 python3.9[204721]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:29:29 compute-0 sudo[204719]: pam_unix(sudo:session): session closed for user root
Jan 26 16:29:29 compute-0 sudo[204844]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opmkvdztyvnlqxqpjfbhpxepkefjjufc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444968.6457965-1074-57483981258410/AnsiballZ_copy.py'
Jan 26 16:29:29 compute-0 sudo[204844]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:29:29 compute-0 python3.9[204846]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769444968.6457965-1074-57483981258410/.source.yaml _original_basename=.8ozr9gjo follow=False checksum=1b845b328c1efd49f477cd2ed510d8a64e81f62b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:29:29 compute-0 sudo[204844]: pam_unix(sudo:session): session closed for user root
Jan 26 16:29:30 compute-0 sudo[205009]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsvwwvztxtmhhylfytvhglkamdaisgbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444970.0140378-1089-9516062057763/AnsiballZ_find.py'
Jan 26 16:29:30 compute-0 sudo[205009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:29:30 compute-0 podman[204970]: 2026-01-26 16:29:30.45606428 +0000 UTC m=+0.069136802 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=unhealthy, health_failing_streak=3, health_log=, container_name=ceilometer_agent_compute, tcib_managed=true, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260120, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=93ecf842527b95c82e14fba92451bd07)
Jan 26 16:29:30 compute-0 systemd[1]: 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0-42908931ce18425b.service: Main process exited, code=exited, status=1/FAILURE
Jan 26 16:29:30 compute-0 systemd[1]: 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0-42908931ce18425b.service: Failed with result 'exit-code'.
Jan 26 16:29:30 compute-0 python3.9[205017]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 26 16:29:30 compute-0 sudo[205009]: pam_unix(sudo:session): session closed for user root
Jan 26 16:29:31 compute-0 sudo[205168]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ruiuzwilwpgxeogsqbxgfqiccfkzqtta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444971.0657942-1099-274115788625856/AnsiballZ_podman_container_info.py'
Jan 26 16:29:31 compute-0 sudo[205168]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:29:31 compute-0 python3.9[205170]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Jan 26 16:29:31 compute-0 sudo[205168]: pam_unix(sudo:session): session closed for user root
Jan 26 16:29:32 compute-0 sudo[205333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tuckvbxyvjfkpkasirxsgczaahzxwqxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444972.0945354-1107-86666425351618/AnsiballZ_podman_container_exec.py'
Jan 26 16:29:32 compute-0 sudo[205333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:29:32 compute-0 python3.9[205335]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 26 16:29:32 compute-0 systemd[1]: Started libpod-conmon-6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d.scope.
Jan 26 16:29:32 compute-0 podman[205336]: 2026-01-26 16:29:32.932761196 +0000 UTC m=+0.109957784 container exec 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:29:32 compute-0 podman[205336]: 2026-01-26 16:29:32.938852091 +0000 UTC m=+0.116048649 container exec_died 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 26 16:29:32 compute-0 sudo[205333]: pam_unix(sudo:session): session closed for user root
Jan 26 16:29:32 compute-0 systemd[1]: libpod-conmon-6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d.scope: Deactivated successfully.
Jan 26 16:29:33 compute-0 sudo[205517]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrwubuemhognhxqgpgjkqbokxxzrywlz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444973.1285646-1115-199706197917239/AnsiballZ_podman_container_exec.py'
Jan 26 16:29:33 compute-0 sudo[205517]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:29:33 compute-0 python3.9[205519]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 26 16:29:33 compute-0 systemd[1]: Started libpod-conmon-6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d.scope.
Jan 26 16:29:33 compute-0 podman[205520]: 2026-01-26 16:29:33.76588838 +0000 UTC m=+0.111305911 container exec 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 26 16:29:33 compute-0 podman[205520]: 2026-01-26 16:29:33.799389366 +0000 UTC m=+0.144806877 container exec_died 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:29:33 compute-0 systemd[1]: libpod-conmon-6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d.scope: Deactivated successfully.
Jan 26 16:29:33 compute-0 sudo[205517]: pam_unix(sudo:session): session closed for user root
Jan 26 16:29:34 compute-0 sudo[205699]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwfhaxecwterpecfrtadxxypdstsupdw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444974.048519-1123-185322095744627/AnsiballZ_file.py'
Jan 26 16:29:34 compute-0 sudo[205699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:29:34 compute-0 python3.9[205701]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:29:34 compute-0 sudo[205699]: pam_unix(sudo:session): session closed for user root
Jan 26 16:29:35 compute-0 sudo[205851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdujfwdwunjjydwmmtiybbjyhxpbrtjj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444974.9642317-1132-8886691924929/AnsiballZ_podman_container_info.py'
Jan 26 16:29:35 compute-0 sudo[205851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:29:35 compute-0 python3.9[205853]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_metadata_agent'] executable=podman
Jan 26 16:29:35 compute-0 sudo[205851]: pam_unix(sudo:session): session closed for user root
Jan 26 16:29:36 compute-0 sudo[206016]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unsazlsqwdqbzzmjhcliisdyilnrvyyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444975.7980866-1140-265704831392275/AnsiballZ_podman_container_exec.py'
Jan 26 16:29:36 compute-0 sudo[206016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:29:36 compute-0 python3.9[206018]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 26 16:29:36 compute-0 systemd[1]: Started libpod-conmon-881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6.scope.
Jan 26 16:29:36 compute-0 podman[206019]: 2026-01-26 16:29:36.410324653 +0000 UTC m=+0.083022485 container exec 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 26 16:29:36 compute-0 podman[206019]: 2026-01-26 16:29:36.440703045 +0000 UTC m=+0.113400847 container exec_died 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 16:29:36 compute-0 systemd[1]: libpod-conmon-881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6.scope: Deactivated successfully.
Jan 26 16:29:36 compute-0 sudo[206016]: pam_unix(sudo:session): session closed for user root
Jan 26 16:29:36 compute-0 sudo[206202]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmvsizuzqxaefdtjmpglxctxiwoyisfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444976.6549869-1148-113469911970657/AnsiballZ_podman_container_exec.py'
Jan 26 16:29:37 compute-0 sudo[206202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:29:37 compute-0 python3.9[206204]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 26 16:29:37 compute-0 systemd[1]: Started libpod-conmon-881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6.scope.
Jan 26 16:29:37 compute-0 podman[206205]: 2026-01-26 16:29:37.343187975 +0000 UTC m=+0.073605572 container exec 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent)
Jan 26 16:29:37 compute-0 podman[206205]: 2026-01-26 16:29:37.353493314 +0000 UTC m=+0.083910881 container exec_died 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 26 16:29:37 compute-0 sudo[206202]: pam_unix(sudo:session): session closed for user root
Jan 26 16:29:37 compute-0 systemd[1]: libpod-conmon-881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6.scope: Deactivated successfully.
Jan 26 16:29:37 compute-0 podman[206224]: 2026-01-26 16:29:37.426823777 +0000 UTC m=+0.078820923 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 26 16:29:37 compute-0 sudo[206412]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqszmyxavrudabqwnnhbnbfzkbzjdpwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444977.593905-1156-152313639010056/AnsiballZ_file.py'
Jan 26 16:29:37 compute-0 sudo[206412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:29:38 compute-0 python3.9[206414]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_metadata_agent recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:29:38 compute-0 sudo[206412]: pam_unix(sudo:session): session closed for user root
Jan 26 16:29:38 compute-0 sudo[206574]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzfnyvbbnvkzpbvnhrcmxwuupcbxibdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444978.6427028-1165-249741047794577/AnsiballZ_podman_container_info.py'
Jan 26 16:29:39 compute-0 sudo[206574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:29:39 compute-0 podman[206538]: 2026-01-26 16:29:39.011864908 +0000 UTC m=+0.076344116 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 26 16:29:39 compute-0 python3.9[206582]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman
Jan 26 16:29:39 compute-0 sudo[206574]: pam_unix(sudo:session): session closed for user root
Jan 26 16:29:39 compute-0 sudo[206745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apkpgrjzbeyslrvjhqmgisfstlsiecdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444979.4652221-1173-158707243868609/AnsiballZ_podman_container_exec.py'
Jan 26 16:29:39 compute-0 sudo[206745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:29:39 compute-0 python3.9[206747]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 26 16:29:40 compute-0 systemd[1]: Started libpod-conmon-5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0.scope.
Jan 26 16:29:40 compute-0 podman[206748]: 2026-01-26 16:29:40.095174208 +0000 UTC m=+0.100166190 container exec 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, tcib_managed=true, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260120, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2)
Jan 26 16:29:40 compute-0 podman[206748]: 2026-01-26 16:29:40.13145901 +0000 UTC m=+0.136450972 container exec_died 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_managed=true, container_name=ceilometer_agent_compute, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260120, org.label-schema.vendor=CentOS)
Jan 26 16:29:40 compute-0 systemd[1]: libpod-conmon-5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0.scope: Deactivated successfully.
Jan 26 16:29:40 compute-0 sudo[206745]: pam_unix(sudo:session): session closed for user root
Jan 26 16:29:40 compute-0 sudo[206931]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iirldfdtsntbdmpoyuqnrauzpgvypeks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444980.3642395-1181-265970930830074/AnsiballZ_podman_container_exec.py'
Jan 26 16:29:40 compute-0 sudo[206931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:29:40 compute-0 python3.9[206933]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 26 16:29:40 compute-0 systemd[1]: Started libpod-conmon-5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0.scope.
Jan 26 16:29:41 compute-0 podman[206934]: 2026-01-26 16:29:41.006821885 +0000 UTC m=+0.098414183 container exec 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20260120, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=93ecf842527b95c82e14fba92451bd07)
Jan 26 16:29:41 compute-0 podman[206934]: 2026-01-26 16:29:41.042344866 +0000 UTC m=+0.133937154 container exec_died 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20260120, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, io.buildah.version=1.41.4, managed_by=edpm_ansible, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute)
Jan 26 16:29:41 compute-0 systemd[1]: libpod-conmon-5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0.scope: Deactivated successfully.
Jan 26 16:29:41 compute-0 sudo[206931]: pam_unix(sudo:session): session closed for user root
Jan 26 16:29:41 compute-0 sudo[207116]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uobuslqhcpetmgoggkoxxchoeifcgzwd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444981.2886727-1189-209354117063783/AnsiballZ_file.py'
Jan 26 16:29:41 compute-0 sudo[207116]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:29:41 compute-0 python3.9[207118]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:29:41 compute-0 sudo[207116]: pam_unix(sudo:session): session closed for user root
Jan 26 16:29:42 compute-0 sudo[207268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opcmnyzhbtuhoysiupsnuwnphgwwlfnw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444982.3150618-1198-15775141114009/AnsiballZ_podman_container_info.py'
Jan 26 16:29:42 compute-0 sudo[207268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:29:42 compute-0 python3.9[207270]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman
Jan 26 16:29:42 compute-0 sudo[207268]: pam_unix(sudo:session): session closed for user root
Jan 26 16:29:43 compute-0 sudo[207433]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdmadznkupqcphruzibdtnerofgnirof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444983.2692413-1206-88551206639238/AnsiballZ_podman_container_exec.py'
Jan 26 16:29:43 compute-0 sudo[207433]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:29:43 compute-0 python3.9[207435]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 26 16:29:43 compute-0 systemd[1]: Started libpod-conmon-89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633.scope.
Jan 26 16:29:43 compute-0 podman[207436]: 2026-01-26 16:29:43.862530384 +0000 UTC m=+0.082499103 container exec 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 26 16:29:43 compute-0 podman[207457]: 2026-01-26 16:29:43.929146785 +0000 UTC m=+0.054449244 container exec_died 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 16:29:43 compute-0 podman[207436]: 2026-01-26 16:29:43.934378516 +0000 UTC m=+0.154347225 container exec_died 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 16:29:43 compute-0 systemd[1]: libpod-conmon-89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633.scope: Deactivated successfully.
Jan 26 16:29:43 compute-0 podman[207454]: 2026-01-26 16:29:43.964787799 +0000 UTC m=+0.092283207 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes 
Operator team, org.label-schema.schema-version=1.0)
Jan 26 16:29:43 compute-0 sudo[207433]: pam_unix(sudo:session): session closed for user root
Jan 26 16:29:44 compute-0 sudo[207645]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzwakqvsyulvzkxcepqmjuzknafhohza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444984.153116-1214-110454704973531/AnsiballZ_podman_container_exec.py'
Jan 26 16:29:44 compute-0 sudo[207645]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:29:44 compute-0 python3.9[207647]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 26 16:29:44 compute-0 systemd[1]: Started libpod-conmon-89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633.scope.
Jan 26 16:29:44 compute-0 podman[207648]: 2026-01-26 16:29:44.74680566 +0000 UTC m=+0.076349116 container exec 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 26 16:29:44 compute-0 podman[207667]: 2026-01-26 16:29:44.812174059 +0000 UTC m=+0.054313471 container exec_died 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 26 16:29:44 compute-0 podman[207648]: 2026-01-26 16:29:44.81850832 +0000 UTC m=+0.148051766 container exec_died 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 26 16:29:44 compute-0 systemd[1]: libpod-conmon-89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633.scope: Deactivated successfully.
Jan 26 16:29:44 compute-0 sudo[207645]: pam_unix(sudo:session): session closed for user root
Jan 26 16:29:45 compute-0 sudo[207829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzvsvnvvemhngfpqwhdkzsmevadpyvte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444985.0113971-1222-52374828239969/AnsiballZ_file.py'
Jan 26 16:29:45 compute-0 sudo[207829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:29:45 compute-0 python3.9[207831]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:29:45 compute-0 sudo[207829]: pam_unix(sudo:session): session closed for user root
Jan 26 16:29:46 compute-0 sudo[207981]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlelyqnajgojyigbdvypbjuphkczcemq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444985.7505078-1231-184708672373078/AnsiballZ_podman_container_info.py'
Jan 26 16:29:46 compute-0 sudo[207981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:29:46 compute-0 python3.9[207983]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Jan 26 16:29:46 compute-0 sudo[207981]: pam_unix(sudo:session): session closed for user root
Jan 26 16:29:46 compute-0 sudo[208147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-geabwciipjcjnwcbxryrlftipozplvsp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444986.5488307-1239-104334802208668/AnsiballZ_podman_container_exec.py'
Jan 26 16:29:46 compute-0 sudo[208147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:29:47 compute-0 python3.9[208149]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 26 16:29:47 compute-0 systemd[1]: Started libpod-conmon-25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64.scope.
Jan 26 16:29:47 compute-0 podman[208150]: 2026-01-26 16:29:47.188701116 +0000 UTC m=+0.097150049 container exec 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 26 16:29:47 compute-0 podman[208150]: 2026-01-26 16:29:47.223386384 +0000 UTC m=+0.131835287 container exec_died 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 26 16:29:47 compute-0 systemd[1]: libpod-conmon-25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64.scope: Deactivated successfully.
Jan 26 16:29:47 compute-0 sudo[208147]: pam_unix(sudo:session): session closed for user root
Jan 26 16:29:47 compute-0 sudo[208332]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yynkkunymbfdldotobsdxquhcbyutyrs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444987.449894-1247-129560908899184/AnsiballZ_podman_container_exec.py'
Jan 26 16:29:47 compute-0 sudo[208332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:29:47 compute-0 python3.9[208334]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 26 16:29:48 compute-0 systemd[1]: Started libpod-conmon-25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64.scope.
Jan 26 16:29:48 compute-0 podman[208335]: 2026-01-26 16:29:48.038401388 +0000 UTC m=+0.076824399 container exec 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 26 16:29:48 compute-0 podman[208335]: 2026-01-26 16:29:48.071513494 +0000 UTC m=+0.109936525 container exec_died 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 26 16:29:48 compute-0 systemd[1]: libpod-conmon-25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64.scope: Deactivated successfully.
Jan 26 16:29:48 compute-0 sudo[208332]: pam_unix(sudo:session): session closed for user root
Jan 26 16:29:48 compute-0 sudo[208516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onmjbbxlsnqgwybeewnczxonivqybyfe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444988.3316631-1255-256440610933075/AnsiballZ_file.py'
Jan 26 16:29:48 compute-0 sudo[208516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:29:48 compute-0 python3.9[208518]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:29:48 compute-0 sudo[208516]: pam_unix(sudo:session): session closed for user root
Jan 26 16:29:49 compute-0 sudo[208685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egovztlalsyylzfygzrxdovemukoukmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444989.0611322-1264-238153009760441/AnsiballZ_podman_container_info.py'
Jan 26 16:29:49 compute-0 podman[208642]: 2026-01-26 16:29:49.389439659 +0000 UTC m=+0.062879101 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 26 16:29:49 compute-0 sudo[208685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:29:49 compute-0 python3.9[208694]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Jan 26 16:29:49 compute-0 sudo[208685]: pam_unix(sudo:session): session closed for user root
Jan 26 16:29:50 compute-0 sudo[208858]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqgeqhjiqghghfzzjclgmywsmkwxxgtr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444989.8758957-1272-70115529344502/AnsiballZ_podman_container_exec.py'
Jan 26 16:29:50 compute-0 sudo[208858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:29:50 compute-0 python3.9[208860]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 26 16:29:50 compute-0 systemd[1]: Started libpod-conmon-2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069.scope.
Jan 26 16:29:50 compute-0 podman[208861]: 2026-01-26 16:29:50.459707287 +0000 UTC m=+0.069944262 container exec 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, config_id=openstack_network_exporter, vcs-type=git, distribution-scope=public, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350)
Jan 26 16:29:50 compute-0 podman[208861]: 2026-01-26 16:29:50.492300108 +0000 UTC m=+0.102537083 container exec_died 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, release=1755695350, maintainer=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41)
Jan 26 16:29:50 compute-0 systemd[1]: libpod-conmon-2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069.scope: Deactivated successfully.
Jan 26 16:29:50 compute-0 sudo[208858]: pam_unix(sudo:session): session closed for user root
Jan 26 16:29:50 compute-0 sudo[209040]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnrodfgohxpimqqxtbnfupxftctgwvvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444990.6703994-1280-148355389689933/AnsiballZ_podman_container_exec.py'
Jan 26 16:29:50 compute-0 sudo[209040]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:29:51 compute-0 python3.9[209042]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 26 16:29:51 compute-0 systemd[1]: Started libpod-conmon-2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069.scope.
Jan 26 16:29:51 compute-0 podman[209043]: 2026-01-26 16:29:51.283345254 +0000 UTC m=+0.072115702 container exec 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, managed_by=edpm_ansible, version=9.6, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.expose-services=, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., container_name=openstack_network_exporter, maintainer=Red Hat, Inc.)
Jan 26 16:29:51 compute-0 podman[209043]: 2026-01-26 16:29:51.313328054 +0000 UTC m=+0.102098492 container exec_died 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, release=1755695350, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, vcs-type=git, distribution-scope=public, container_name=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64)
Jan 26 16:29:51 compute-0 systemd[1]: libpod-conmon-2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069.scope: Deactivated successfully.
Jan 26 16:29:51 compute-0 sudo[209040]: pam_unix(sudo:session): session closed for user root
Jan 26 16:29:51 compute-0 sudo[209223]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntscfqdcarwqxlumzqguzubbeycinprb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444991.5302143-1288-237164075354998/AnsiballZ_file.py'
Jan 26 16:29:51 compute-0 sudo[209223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:29:52 compute-0 python3.9[209225]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:29:52 compute-0 sudo[209223]: pam_unix(sudo:session): session closed for user root
Jan 26 16:29:52 compute-0 sudo[209375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enaobnoknwaoofnbpejmcjouffapskrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444992.289093-1297-7078594461308/AnsiballZ_file.py'
Jan 26 16:29:52 compute-0 sudo[209375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:29:52 compute-0 python3.9[209377]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:29:52 compute-0 sudo[209375]: pam_unix(sudo:session): session closed for user root
Jan 26 16:29:53 compute-0 sudo[209527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gyswglhdplapffbjnpftjxowulnodhnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444993.172448-1305-63150148422090/AnsiballZ_stat.py'
Jan 26 16:29:53 compute-0 sudo[209527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:29:53 compute-0 python3.9[209529]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/telemetry.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:29:53 compute-0 sudo[209527]: pam_unix(sudo:session): session closed for user root
Jan 26 16:29:53 compute-0 sudo[209650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgpqfcdepffvsxzkysscmucwzjygengt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444993.172448-1305-63150148422090/AnsiballZ_copy.py'
Jan 26 16:29:53 compute-0 sudo[209650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:29:54 compute-0 python3.9[209652]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/telemetry.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769444993.172448-1305-63150148422090/.source.yaml _original_basename=firewall.yaml follow=False checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:29:54 compute-0 sudo[209650]: pam_unix(sudo:session): session closed for user root
Jan 26 16:29:54 compute-0 sudo[209802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khuitkerzullnhaaszcladiuvnwsezmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444994.4631338-1321-120074327033740/AnsiballZ_file.py'
Jan 26 16:29:54 compute-0 sudo[209802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:29:54 compute-0 python3.9[209804]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:29:55 compute-0 sudo[209802]: pam_unix(sudo:session): session closed for user root
Jan 26 16:29:55 compute-0 sudo[209954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpbowjopvmgaxwbwyduvezrwutwjocla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444995.1922317-1329-233202074408793/AnsiballZ_stat.py'
Jan 26 16:29:55 compute-0 sudo[209954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:29:56 compute-0 python3.9[209956]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:29:56 compute-0 sudo[209954]: pam_unix(sudo:session): session closed for user root
Jan 26 16:29:56 compute-0 sudo[210032]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idyuguzqhxzcanawqhjxttgmdshsufvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444995.1922317-1329-233202074408793/AnsiballZ_file.py'
Jan 26 16:29:56 compute-0 sudo[210032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:29:56 compute-0 python3.9[210034]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:29:56 compute-0 sudo[210032]: pam_unix(sudo:session): session closed for user root
Jan 26 16:29:57 compute-0 sudo[210184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxyhfgkmzylwhhteyuorjucwupmnrhys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444996.9476058-1341-158538674350295/AnsiballZ_stat.py'
Jan 26 16:29:57 compute-0 sudo[210184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:29:57 compute-0 podman[210186]: 2026-01-26 16:29:57.561600702 +0000 UTC m=+0.065806432 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., architecture=x86_64, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1755695350, version=9.6, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Jan 26 16:29:57 compute-0 python3.9[210187]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:29:57 compute-0 sudo[210184]: pam_unix(sudo:session): session closed for user root
Jan 26 16:29:58 compute-0 sudo[210284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkjseurlwkfmnczvowjaqwimhvrydmfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444996.9476058-1341-158538674350295/AnsiballZ_file.py'
Jan 26 16:29:58 compute-0 sudo[210284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:29:58 compute-0 python3.9[210286]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.h1w3b1ia recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:29:58 compute-0 sudo[210284]: pam_unix(sudo:session): session closed for user root
Jan 26 16:29:58 compute-0 sudo[210436]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbdagmfumqsfkgrrtcyhgjoegkwhmaby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444998.477-1353-238914457960174/AnsiballZ_stat.py'
Jan 26 16:29:58 compute-0 sudo[210436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:29:59 compute-0 python3.9[210438]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:29:59 compute-0 sudo[210436]: pam_unix(sudo:session): session closed for user root
Jan 26 16:29:59 compute-0 sudo[210514]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tiohfkyudrouytqrpepcbqkjqbnnvcyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444998.477-1353-238914457960174/AnsiballZ_file.py'
Jan 26 16:29:59 compute-0 sudo[210514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:29:59 compute-0 python3.9[210516]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:29:59 compute-0 sudo[210514]: pam_unix(sudo:session): session closed for user root
Jan 26 16:30:00 compute-0 sudo[210666]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccvqsptutmplhzstajcxnjixwnenfqsb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769444999.7811046-1366-104167864789881/AnsiballZ_command.py'
Jan 26 16:30:00 compute-0 sudo[210666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:30:00 compute-0 python3.9[210668]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 16:30:00 compute-0 sudo[210666]: pam_unix(sudo:session): session closed for user root
Jan 26 16:30:01 compute-0 podman[210753]: 2026-01-26 16:30:01.171411055 +0000 UTC m=+0.059946732 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, managed_by=edpm_ansible, org.label-schema.build-date=20260120, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.license=GPLv2)
Jan 26 16:30:01 compute-0 sudo[210839]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvqmbzkcfmfltdhxjofoxtbjneumuwpy ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769445000.5567052-1374-72750731163706/AnsiballZ_edpm_nftables_from_files.py'
Jan 26 16:30:01 compute-0 sudo[210839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:30:01 compute-0 python3[210841]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 26 16:30:01 compute-0 sudo[210839]: pam_unix(sudo:session): session closed for user root
Jan 26 16:30:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:30:01.705 106955 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:30:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:30:01.706 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:30:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:30:01.706 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:30:02 compute-0 sudo[210991]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdefcafjjflqxkgmddyppsvdgqytvmhp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445001.687873-1382-267405638185403/AnsiballZ_stat.py'
Jan 26 16:30:02 compute-0 sudo[210991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:30:02 compute-0 python3.9[210993]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:30:02 compute-0 sudo[210991]: pam_unix(sudo:session): session closed for user root
Jan 26 16:30:02 compute-0 sudo[211069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fopdhisgltwibymcqpwexqbxkvlsktht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445001.687873-1382-267405638185403/AnsiballZ_file.py'
Jan 26 16:30:02 compute-0 sudo[211069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:30:02 compute-0 python3.9[211071]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:30:02 compute-0 sudo[211069]: pam_unix(sudo:session): session closed for user root
Jan 26 16:30:03 compute-0 sudo[211221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjvbwqqwtxajdugfobyxqlfjfjryrmen ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445002.9928682-1394-130336584601425/AnsiballZ_stat.py'
Jan 26 16:30:03 compute-0 sudo[211221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:30:03 compute-0 python3.9[211223]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:30:03 compute-0 sudo[211221]: pam_unix(sudo:session): session closed for user root
Jan 26 16:30:03 compute-0 sudo[211299]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwmwfhktahnrjgqroymgjfmhvikmrrif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445002.9928682-1394-130336584601425/AnsiballZ_file.py'
Jan 26 16:30:03 compute-0 sudo[211299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:30:04 compute-0 python3.9[211301]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:30:04 compute-0 sudo[211299]: pam_unix(sudo:session): session closed for user root
Jan 26 16:30:04 compute-0 sudo[211451]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqcamvodwhxgwxndrjpktkoozotyzppx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445004.5111117-1406-25951047816398/AnsiballZ_stat.py'
Jan 26 16:30:04 compute-0 sudo[211451]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:30:05 compute-0 python3.9[211453]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:30:05 compute-0 sudo[211451]: pam_unix(sudo:session): session closed for user root
Jan 26 16:30:05 compute-0 sudo[211529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcuucohbbcpglsknxwcictbnknnotjbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445004.5111117-1406-25951047816398/AnsiballZ_file.py'
Jan 26 16:30:05 compute-0 sudo[211529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:30:05 compute-0 python3.9[211531]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:30:05 compute-0 sudo[211529]: pam_unix(sudo:session): session closed for user root
Jan 26 16:30:06 compute-0 sudo[211681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xufulzkzbiapcxavlkvfahzxslkgpetv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445005.6901584-1418-5198049208122/AnsiballZ_stat.py'
Jan 26 16:30:06 compute-0 sudo[211681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:30:06 compute-0 python3.9[211683]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:30:06 compute-0 sudo[211681]: pam_unix(sudo:session): session closed for user root
Jan 26 16:30:06 compute-0 sudo[211759]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idfruywnuckjlmhojknxddfkuqldfemv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445005.6901584-1418-5198049208122/AnsiballZ_file.py'
Jan 26 16:30:06 compute-0 sudo[211759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:30:06 compute-0 python3.9[211761]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:30:06 compute-0 sudo[211759]: pam_unix(sudo:session): session closed for user root
Jan 26 16:30:07 compute-0 sudo[211911]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lobanxohixkynwuytveulwqnhwktdeah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445006.9508173-1430-113832372398164/AnsiballZ_stat.py'
Jan 26 16:30:07 compute-0 sudo[211911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:30:07 compute-0 python3.9[211913]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:30:07 compute-0 sudo[211911]: pam_unix(sudo:session): session closed for user root
Jan 26 16:30:07 compute-0 sudo[212046]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsshentutwgzcfqjgdttfaqjlgsjieph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445006.9508173-1430-113832372398164/AnsiballZ_copy.py'
Jan 26 16:30:07 compute-0 sudo[212046]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:30:07 compute-0 podman[212010]: 2026-01-26 16:30:07.981110856 +0000 UTC m=+0.060581500 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 26 16:30:08 compute-0 python3.9[212054]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769445006.9508173-1430-113832372398164/.source.nft follow=False _original_basename=ruleset.j2 checksum=fb3275eced3a2e06312143189928124e1b2df34a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:30:08 compute-0 sudo[212046]: pam_unix(sudo:session): session closed for user root
Jan 26 16:30:08 compute-0 sudo[212210]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-osngpkwzmrosdyzkoaznvpwexhskjuum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445008.4563491-1445-50814714287015/AnsiballZ_file.py'
Jan 26 16:30:08 compute-0 sudo[212210]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:30:08 compute-0 python3.9[212212]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:30:08 compute-0 sudo[212210]: pam_unix(sudo:session): session closed for user root
Jan 26 16:30:09 compute-0 podman[212237]: 2026-01-26 16:30:09.154857102 +0000 UTC m=+0.048393210 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 26 16:30:09 compute-0 sudo[212381]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdlhlvqlhmbgrmeayijwdpcfacsaxmdu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445009.1484416-1453-152168173963984/AnsiballZ_command.py'
Jan 26 16:30:09 compute-0 sudo[212381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:30:09 compute-0 python3.9[212383]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 16:30:09 compute-0 sudo[212381]: pam_unix(sudo:session): session closed for user root
Jan 26 16:30:10 compute-0 sudo[212536]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgmwmdhqevebpnijkzczwjwwuvwtgxtn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445009.9400337-1461-205372301419653/AnsiballZ_blockinfile.py'
Jan 26 16:30:10 compute-0 sudo[212536]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:30:10 compute-0 python3.9[212538]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:30:10 compute-0 sudo[212536]: pam_unix(sudo:session): session closed for user root
Jan 26 16:30:11 compute-0 sudo[212688]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxdjpgbvwugfqlhmcjtuwfflfqjwogax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445011.00204-1470-225231026184170/AnsiballZ_command.py'
Jan 26 16:30:11 compute-0 sudo[212688]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:30:11 compute-0 python3.9[212690]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 16:30:11 compute-0 sudo[212688]: pam_unix(sudo:session): session closed for user root
Jan 26 16:30:11 compute-0 sudo[212841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jerfpdhdelniqwdibrqsojjrjcjqbcit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445011.689952-1478-229183399228633/AnsiballZ_stat.py'
Jan 26 16:30:11 compute-0 sudo[212841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:30:12 compute-0 python3.9[212843]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 16:30:12 compute-0 sudo[212841]: pam_unix(sudo:session): session closed for user root
Jan 26 16:30:12 compute-0 sudo[212995]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nydqvdydcybmyndjxqajfzjgawzntkdm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445012.3728905-1486-99865979142833/AnsiballZ_command.py'
Jan 26 16:30:12 compute-0 sudo[212995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:30:13 compute-0 python3.9[212997]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 16:30:13 compute-0 sudo[212995]: pam_unix(sudo:session): session closed for user root
Jan 26 16:30:13 compute-0 sudo[213150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phxhloddvobfywcuicqntychmlzceddl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445013.3218586-1494-83024551110591/AnsiballZ_file.py'
Jan 26 16:30:13 compute-0 sudo[213150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:30:13 compute-0 python3.9[213152]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:30:13 compute-0 sudo[213150]: pam_unix(sudo:session): session closed for user root
Jan 26 16:30:14 compute-0 podman[213177]: 2026-01-26 16:30:14.251296225 +0000 UTC m=+0.117448908 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 26 16:30:14 compute-0 sshd-session[185713]: Connection closed by 192.168.122.30 port 45882
Jan 26 16:30:14 compute-0 sshd-session[185710]: pam_unix(sshd:session): session closed for user zuul
Jan 26 16:30:14 compute-0 systemd[1]: session-26.scope: Deactivated successfully.
Jan 26 16:30:14 compute-0 systemd[1]: session-26.scope: Consumed 1min 57.208s CPU time.
Jan 26 16:30:14 compute-0 systemd-logind[788]: Session 26 logged out. Waiting for processes to exit.
Jan 26 16:30:14 compute-0 systemd-logind[788]: Removed session 26.
Jan 26 16:30:20 compute-0 podman[213204]: 2026-01-26 16:30:20.208762067 +0000 UTC m=+0.089476011 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 26 16:30:20 compute-0 sshd-session[213227]: Accepted publickey for zuul from 192.168.122.30 port 35206 ssh2: ECDSA SHA256:e2nTWydmUlQnW5BYhfFSR+TRarHnUqn0luI8Mjiyqhk
Jan 26 16:30:20 compute-0 systemd-logind[788]: New session 27 of user zuul.
Jan 26 16:30:20 compute-0 systemd[1]: Started Session 27 of User zuul.
Jan 26 16:30:20 compute-0 sshd-session[213227]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 26 16:30:21 compute-0 sudo[213381]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agtkwtmlzsrbavqwcioqiqluoqzmekri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445020.875796-19-250101425485605/AnsiballZ_systemd_service.py'
Jan 26 16:30:21 compute-0 sudo[213381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:30:21 compute-0 python3.9[213383]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 26 16:30:21 compute-0 systemd[1]: Reloading.
Jan 26 16:30:21 compute-0 systemd-rc-local-generator[213412]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 16:30:21 compute-0 systemd-sysv-generator[213415]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 16:30:22 compute-0 sudo[213381]: pam_unix(sudo:session): session closed for user root
Jan 26 16:30:23 compute-0 python3.9[213569]: ansible-ansible.builtin.service_facts Invoked
Jan 26 16:30:23 compute-0 network[213586]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 26 16:30:23 compute-0 network[213587]: 'network-scripts' will be removed from distribution in near future.
Jan 26 16:30:23 compute-0 network[213588]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 26 16:30:27 compute-0 nova_compute[185389]: 2026-01-26 16:30:27.158 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:30:27 compute-0 nova_compute[185389]: 2026-01-26 16:30:27.160 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:30:27 compute-0 nova_compute[185389]: 2026-01-26 16:30:27.161 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:30:27 compute-0 nova_compute[185389]: 2026-01-26 16:30:27.161 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 16:30:27 compute-0 nova_compute[185389]: 2026-01-26 16:30:27.185 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 16:30:27 compute-0 nova_compute[185389]: 2026-01-26 16:30:27.185 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:30:27 compute-0 nova_compute[185389]: 2026-01-26 16:30:27.186 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:30:27 compute-0 nova_compute[185389]: 2026-01-26 16:30:27.186 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:30:27 compute-0 nova_compute[185389]: 2026-01-26 16:30:27.187 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:30:27 compute-0 nova_compute[185389]: 2026-01-26 16:30:27.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:30:27 compute-0 podman[213661]: 2026-01-26 16:30:27.721895844 +0000 UTC m=+0.102669908 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, version=9.6, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1755695350, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.33.7, architecture=x86_64)
Jan 26 16:30:27 compute-0 nova_compute[185389]: 2026-01-26 16:30:27.748 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:30:27 compute-0 nova_compute[185389]: 2026-01-26 16:30:27.749 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:30:27 compute-0 nova_compute[185389]: 2026-01-26 16:30:27.749 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:30:27 compute-0 nova_compute[185389]: 2026-01-26 16:30:27.750 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:30:28 compute-0 nova_compute[185389]: 2026-01-26 16:30:28.010 185393 WARNING nova.virt.libvirt.driver [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:30:28 compute-0 nova_compute[185389]: 2026-01-26 16:30:28.012 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5902MB free_disk=72.47935485839844GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:30:28 compute-0 nova_compute[185389]: 2026-01-26 16:30:28.012 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:30:28 compute-0 nova_compute[185389]: 2026-01-26 16:30:28.012 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:30:28 compute-0 nova_compute[185389]: 2026-01-26 16:30:28.082 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:30:28 compute-0 nova_compute[185389]: 2026-01-26 16:30:28.083 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:30:28 compute-0 nova_compute[185389]: 2026-01-26 16:30:28.110 185393 DEBUG nova.compute.provider_tree [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:30:28 compute-0 nova_compute[185389]: 2026-01-26 16:30:28.129 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:30:28 compute-0 nova_compute[185389]: 2026-01-26 16:30:28.133 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:30:28 compute-0 nova_compute[185389]: 2026-01-26 16:30:28.133 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.121s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:30:29 compute-0 nova_compute[185389]: 2026-01-26 16:30:29.134 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:30:29 compute-0 nova_compute[185389]: 2026-01-26 16:30:29.134 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:30:29 compute-0 nova_compute[185389]: 2026-01-26 16:30:29.134 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:30:29 compute-0 sudo[213880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spsvgqgpcqwsdmqvihuwbdkxnjyvmlyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445028.9804106-42-219922715738868/AnsiballZ_systemd_service.py'
Jan 26 16:30:29 compute-0 sudo[213880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:30:29 compute-0 python3.9[213882]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_ipmi.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 16:30:29 compute-0 podman[201244]: time="2026-01-26T16:30:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 16:30:29 compute-0 podman[201244]: @ - - [26/Jan/2026:16:30:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 21257 "" "Go-http-client/1.1"
Jan 26 16:30:29 compute-0 podman[201244]: @ - - [26/Jan/2026:16:30:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2998 "" "Go-http-client/1.1"
Jan 26 16:30:30 compute-0 sudo[213880]: pam_unix(sudo:session): session closed for user root
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.327 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.328 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.328 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cd2b1fd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.329 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f04ce8af020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cd2b1fd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cd2b1fd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cd2b1fd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cd2b1fd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cd2b1fd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cd2b1fd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cd2b1fd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cd2b1fd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cd2b1fd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cd2b1fd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cd2b1fd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cd2b1fd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8adb20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cd2b1fd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cd2b1fd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cd2b1fd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cd2b1fd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cd2b1fd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cd2b1fd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cd2b1fd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cd2b1fd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cd2b1fd0>] with cache [{}], pollster history [{'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cd2b1fd0>] with cache [{}], pollster history [{'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.331 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cd2b1fd0>] with cache [{}], pollster history [{'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.332 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f04ce8af080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.332 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cd2b1fd0>] with cache [{}], pollster history [{'disk.device.write.bytes': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.332 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.332 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cd2b1fd0>] with cache [{}], pollster history [{'disk.device.write.bytes': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.332 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f04ce8af0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.332 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.332 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f04ce8af470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.333 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.333 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f04ce8ac260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.333 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.333 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f04cfa81c70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.333 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.333 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f04ce8ac170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.333 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.333 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f04ce8af140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.333 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.333 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f04ce8adb50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.333 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.334 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f04ce8ad9a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.334 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.334 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f04ce8af1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.334 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.334 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f04ce8ad8e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.334 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.334 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f04ce8ada60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.334 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.334 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f04ce8adaf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.334 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.334 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f04ce8ad880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.334 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.335 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f04ce8af3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.335 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.335 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f04ce8af6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.335 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.335 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f04ce8af410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.335 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.335 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f04ce8ac470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.335 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.335 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f04d17142f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.335 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.336 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f04ce8afe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.336 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.336 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f04ce8aede0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.336 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.336 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f04ce8aef00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.336 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.336 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f04ce8ac710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.336 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.336 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f04ce8aef60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.336 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.336 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f04ce8aefc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.337 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.337 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.337 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.337 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.337 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.337 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.337 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.337 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.337 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.337 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.337 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.339 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.339 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:30:31.339 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:30:31 compute-0 openstack_network_exporter[204387]: ERROR   16:30:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 16:30:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:30:31 compute-0 openstack_network_exporter[204387]: ERROR   16:30:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 16:30:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:30:31 compute-0 sudo[214051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flkiiopcebgxjjzutxrqfoixslnmloqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445031.0600638-52-113898049451876/AnsiballZ_file.py'
Jan 26 16:30:31 compute-0 sudo[214051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:30:31 compute-0 podman[214016]: 2026-01-26 16:30:31.736800083 +0000 UTC m=+0.076954254 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, 
io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, org.label-schema.build-date=20260120, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true)
Jan 26 16:30:31 compute-0 python3.9[214064]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:30:31 compute-0 sudo[214051]: pam_unix(sudo:session): session closed for user root
Jan 26 16:30:32 compute-0 sudo[214214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wratzoybeklpgocatwqeotmpowqnvrfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445032.170963-60-151101512688964/AnsiballZ_file.py'
Jan 26 16:30:32 compute-0 sudo[214214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:30:32 compute-0 python3.9[214216]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:30:32 compute-0 sudo[214214]: pam_unix(sudo:session): session closed for user root
Jan 26 16:30:33 compute-0 sudo[214366]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvehqtbdjbqfqknunqycecpanitozcoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445032.9939513-69-162617836245406/AnsiballZ_command.py'
Jan 26 16:30:33 compute-0 sudo[214366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:30:33 compute-0 python3.9[214368]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 16:30:33 compute-0 sudo[214366]: pam_unix(sudo:session): session closed for user root
Jan 26 16:30:34 compute-0 python3.9[214520]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 26 16:30:35 compute-0 sudo[214670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlddmovfsyzvhiykjhyiewcpfipkbjai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445034.986495-87-235367969394716/AnsiballZ_systemd_service.py'
Jan 26 16:30:35 compute-0 sudo[214670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:30:35 compute-0 python3.9[214672]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 26 16:30:35 compute-0 systemd[1]: Reloading.
Jan 26 16:30:35 compute-0 systemd-rc-local-generator[214701]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 16:30:35 compute-0 systemd-sysv-generator[214705]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 16:30:35 compute-0 sudo[214670]: pam_unix(sudo:session): session closed for user root
Jan 26 16:30:36 compute-0 sudo[214857]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfsqmyefcynzkwjjjrimguprelqhnglp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445036.2027361-95-73215420927260/AnsiballZ_command.py'
Jan 26 16:30:36 compute-0 sudo[214857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:30:36 compute-0 python3.9[214859]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_ipmi.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 16:30:36 compute-0 sudo[214857]: pam_unix(sudo:session): session closed for user root
Jan 26 16:30:37 compute-0 sudo[215010]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzptyqjtlvdqhqfrzyuukgvfavsxcvix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445036.967277-104-226064820550585/AnsiballZ_file.py'
Jan 26 16:30:37 compute-0 sudo[215010]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:30:37 compute-0 python3.9[215012]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/telemetry-power-monitoring recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:30:37 compute-0 sudo[215010]: pam_unix(sudo:session): session closed for user root
Jan 26 16:30:38 compute-0 podman[215113]: 2026-01-26 16:30:38.196901342 +0000 UTC m=+0.071526765 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 26 16:30:38 compute-0 python3.9[215186]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 16:30:39 compute-0 podman[215312]: 2026-01-26 16:30:39.269000683 +0000 UTC m=+0.058087371 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, config_id=ovn_metadata_agent)
Jan 26 16:30:39 compute-0 python3.9[215351]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry-power-monitoring/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:30:40 compute-0 python3.9[215478]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry-power-monitoring/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769445038.6533363-120-59057239457876/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=e86e0e43000ce9ccfe5aefbf8e8f2e3d15d05584 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:30:41 compute-0 python3.9[215628]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry-power-monitoring/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:30:41 compute-0 python3.9[215749]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry-power-monitoring/firewall.yaml mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769445040.3737643-135-204998589908607/.source.yaml _original_basename=firewall.yaml follow=False checksum=40b8960d32c81de936cddbeb137a8240ecc54e7b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:30:43 compute-0 sudo[215899]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjzpaahpkgqydxsuzqfjryoidoeiphfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445042.0831797-153-244178533415701/AnsiballZ_getent.py'
Jan 26 16:30:43 compute-0 sudo[215899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:30:43 compute-0 python3.9[215901]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Jan 26 16:30:43 compute-0 sudo[215899]: pam_unix(sudo:session): session closed for user root
Jan 26 16:30:44 compute-0 podman[216026]: 2026-01-26 16:30:44.723423619 +0000 UTC m=+0.106269971 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 26 16:30:44 compute-0 python3.9[216067]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry-power-monitoring/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:30:45 compute-0 python3.9[216199]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry-power-monitoring/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1769445044.3183002-181-5417170626197/.source.conf _original_basename=ceilometer.conf follow=False checksum=f817847bb0474d7c55a7ad9afdea5f1400a30720 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:30:46 compute-0 python3.9[216349]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry-power-monitoring/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:30:47 compute-0 python3.9[216470]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry-power-monitoring/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1769445045.6671736-181-44664736655503/.source.yaml _original_basename=polling.yaml follow=False checksum=5ef7021082c6431099dde63e021011029cd65119 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:30:47 compute-0 python3.9[216620]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry-power-monitoring/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:30:48 compute-0 python3.9[216741]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry-power-monitoring/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1769445047.249332-181-64890275576387/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:30:49 compute-0 python3.9[216891]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 16:30:50 compute-0 python3.9[217043]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 16:30:50 compute-0 podman[217169]: 2026-01-26 16:30:50.973075865 +0000 UTC m=+0.068718441 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 26 16:30:51 compute-0 python3.9[217210]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:30:51 compute-0 python3.9[217340]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1769445050.6501126-240-188393669031770/.source.yaml _original_basename=ceilometer_prom_exporter.yaml follow=False checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:30:52 compute-0 sudo[217490]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywuxtsellwkyqmwllmifznmdelcrjyon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445051.941426-255-160311455374791/AnsiballZ_file.py'
Jan 26 16:30:52 compute-0 sudo[217490]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:30:52 compute-0 python3.9[217492]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:30:52 compute-0 sudo[217490]: pam_unix(sudo:session): session closed for user root
Jan 26 16:30:53 compute-0 sudo[217642]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpdpsxmrltrjwzlodphuqwhhplyeusqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445052.7683778-263-256187885287583/AnsiballZ_file.py'
Jan 26 16:30:53 compute-0 sudo[217642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:30:53 compute-0 python3.9[217644]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:30:53 compute-0 sudo[217642]: pam_unix(sudo:session): session closed for user root
Jan 26 16:30:53 compute-0 sudo[217794]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcukvvbszmjedwzzsyrvjdirtxgztcba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445053.6005213-271-227485905833645/AnsiballZ_file.py'
Jan 26 16:30:53 compute-0 sudo[217794]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:30:54 compute-0 python3.9[217796]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:30:54 compute-0 sudo[217794]: pam_unix(sudo:session): session closed for user root
Jan 26 16:30:54 compute-0 sudo[217946]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwojywbhrydtcgstwbnepbduxsyjwmbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445054.3307993-279-93012180024233/AnsiballZ_stat.py'
Jan 26 16:30:54 compute-0 sudo[217946]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:30:54 compute-0 python3.9[217948]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:30:54 compute-0 sudo[217946]: pam_unix(sudo:session): session closed for user root
Jan 26 16:30:55 compute-0 sudo[218069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjopgjyohyammbfylbdtcmdzvlwmzzsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445054.3307993-279-93012180024233/AnsiballZ_copy.py'
Jan 26 16:30:55 compute-0 sudo[218069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:30:55 compute-0 python3.9[218071]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769445054.3307993-279-93012180024233/.source _original_basename=healthcheck follow=False checksum=ebb343c21fce35a02591a9351660cb7035a47d42 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:30:55 compute-0 sudo[218069]: pam_unix(sudo:session): session closed for user root
Jan 26 16:30:55 compute-0 sudo[218145]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qanqzykuxlpyyazjmcyqwhkkrtqxdiku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445054.3307993-279-93012180024233/AnsiballZ_stat.py'
Jan 26 16:30:55 compute-0 sudo[218145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:30:55 compute-0 python3.9[218147]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:30:56 compute-0 sudo[218145]: pam_unix(sudo:session): session closed for user root
Jan 26 16:30:56 compute-0 sudo[218268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkgnrbsdzogmcvvemjqmnylxbdbttdiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445054.3307993-279-93012180024233/AnsiballZ_copy.py'
Jan 26 16:30:56 compute-0 sudo[218268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:30:57 compute-0 python3.9[218270]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769445054.3307993-279-93012180024233/.source.future _original_basename=healthcheck.future follow=False checksum=d500a98192f4ddd70b4dfdc059e2d81aed36a294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:30:57 compute-0 sudo[218268]: pam_unix(sudo:session): session closed for user root
Jan 26 16:30:57 compute-0 sudo[218420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfinulhzxuogbhbimepsprzoozevfuko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445057.2483304-279-23882881466274/AnsiballZ_stat.py'
Jan 26 16:30:57 compute-0 sudo[218420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:30:57 compute-0 python3.9[218422]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/kepler/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:30:57 compute-0 sudo[218420]: pam_unix(sudo:session): session closed for user root
Jan 26 16:30:58 compute-0 podman[218493]: 2026-01-26 16:30:58.18742839 +0000 UTC m=+0.068312529 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, distribution-scope=public, version=9.6, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, managed_by=edpm_ansible, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., io.openshift.expose-services=, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container)
Jan 26 16:30:58 compute-0 sudo[218564]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uziaimrswyvpnvxhyepmbzkpqyebulxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445057.2483304-279-23882881466274/AnsiballZ_copy.py'
Jan 26 16:30:58 compute-0 sudo[218564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:30:58 compute-0 python3.9[218566]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/kepler/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769445057.2483304-279-23882881466274/.source _original_basename=healthcheck follow=False checksum=57ed53cc150174efd98819129660d5b9ea9ea61a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:30:58 compute-0 sudo[218564]: pam_unix(sudo:session): session closed for user root
Jan 26 16:30:59 compute-0 sudo[218716]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-saznvsjsvygxkwqzpskizpzkhsvqmckx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445058.8049283-321-168670782858280/AnsiballZ_file.py'
Jan 26 16:30:59 compute-0 sudo[218716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:30:59 compute-0 python3.9[218718]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:30:59 compute-0 sudo[218716]: pam_unix(sudo:session): session closed for user root
Jan 26 16:30:59 compute-0 podman[201244]: time="2026-01-26T16:30:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 16:30:59 compute-0 podman[201244]: @ - - [26/Jan/2026:16:30:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 21257 "" "Go-http-client/1.1"
Jan 26 16:30:59 compute-0 podman[201244]: @ - - [26/Jan/2026:16:30:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3006 "" "Go-http-client/1.1"
Jan 26 16:31:00 compute-0 sudo[218869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgxhdczkugdgoocmpqpwgikxazbpsdig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445059.7362423-329-129188810951858/AnsiballZ_file.py'
Jan 26 16:31:00 compute-0 sudo[218869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:31:00 compute-0 python3.9[218871]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:31:00 compute-0 sudo[218869]: pam_unix(sudo:session): session closed for user root
Jan 26 16:31:01 compute-0 sudo[219021]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdwugzanphgvhvlgkipwdiskcjksikpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445060.947428-337-160809077080920/AnsiballZ_stat.py'
Jan 26 16:31:01 compute-0 sudo[219021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:31:01 compute-0 openstack_network_exporter[204387]: ERROR   16:31:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 16:31:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:31:01 compute-0 openstack_network_exporter[204387]: ERROR   16:31:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 16:31:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:31:01 compute-0 python3.9[219023]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ceilometer_agent_ipmi.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:31:01 compute-0 sudo[219021]: pam_unix(sudo:session): session closed for user root
Jan 26 16:31:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:31:01.706 106955 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:31:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:31:01.706 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:31:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:31:01.707 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:31:01 compute-0 sudo[219158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rofgjvaagtmhrbumzrdxzjccpxvzuxbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445060.947428-337-160809077080920/AnsiballZ_copy.py'
Jan 26 16:31:01 compute-0 sudo[219158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:31:01 compute-0 podman[219118]: 2026-01-26 16:31:01.989644997 +0000 UTC m=+0.069886642 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260120, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, tcib_managed=true)
Jan 26 16:31:02 compute-0 python3.9[219162]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ceilometer_agent_ipmi.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769445060.947428-337-160809077080920/.source.json _original_basename=.lw_z3d2r follow=False checksum=fa47598aea39469905a43b7b570ec2fd120965fc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:31:02 compute-0 sudo[219158]: pam_unix(sudo:session): session closed for user root
Jan 26 16:31:02 compute-0 python3.9[219316]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ceilometer_agent_ipmi state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:31:06 compute-0 sudo[219737]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtvdxmhusdwgnmovvweoawkjbdkshwue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445065.161389-377-240413124310391/AnsiballZ_container_config_data.py'
Jan 26 16:31:06 compute-0 sudo[219737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:31:06 compute-0 python3.9[219739]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ceilometer_agent_ipmi config_pattern=*.json debug=False
Jan 26 16:31:06 compute-0 sudo[219737]: pam_unix(sudo:session): session closed for user root
Jan 26 16:31:07 compute-0 sudo[219889]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvbxllaglotxxhnugwkoghilvpvjzwsd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445066.588448-388-55408766876697/AnsiballZ_container_config_hash.py'
Jan 26 16:31:07 compute-0 sudo[219889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:31:07 compute-0 python3.9[219891]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 26 16:31:07 compute-0 sudo[219889]: pam_unix(sudo:session): session closed for user root
Jan 26 16:31:08 compute-0 sudo[220056]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcvjxabtwkyzvmreefcfffypnrgkxvoh ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769445067.8115008-398-242024497541029/AnsiballZ_edpm_container_manage.py'
Jan 26 16:31:08 compute-0 sudo[220056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:31:08 compute-0 podman[220015]: 2026-01-26 16:31:08.423316389 +0000 UTC m=+0.090688688 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 26 16:31:08 compute-0 python3[220062]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ceilometer_agent_ipmi config_id=ceilometer_agent_ipmi config_overrides={} config_patterns=*.json containers=['ceilometer_agent_ipmi'] log_base_path=/var/log/containers/stdouts debug=False
Jan 26 16:31:08 compute-0 podman[220099]: 2026-01-26 16:31:08.855035891 +0000 UTC m=+0.058484322 container create 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251202)
Jan 26 16:31:08 compute-0 podman[220099]: 2026-01-26 16:31:08.819554846 +0000 UTC m=+0.023003367 image pull a92f7bca491c0b0ce2687db04282e6791be0613adb46862c56450b0e1308679d quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified
Jan 26 16:31:08 compute-0 python3[220062]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ceilometer_agent_ipmi --conmon-pidfile /run/ceilometer_agent_ipmi.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env OS_ENDPOINT_TYPE=internal --env EDPM_CONFIG_HASH=6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d --healthcheck-command /openstack/healthcheck ipmi --label config_id=ceilometer_agent_ipmi --label container_name=ceilometer_agent_ipmi --label managed_by=edpm_ansible --label config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --security-opt label:type:ceilometer_polling_t --user ceilometer --volume /var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z --volume /var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z --volume /etc/hosts:/etc/hosts:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z --volume /dev/log:/dev/log --volume /var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified kolla_start
Jan 26 16:31:08 compute-0 sudo[220056]: pam_unix(sudo:session): session closed for user root
Jan 26 16:31:09 compute-0 sudo[220304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmgcwjwjhgfswmcuzcgrgiayiaptwsqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445069.5921357-406-218550364439272/AnsiballZ_stat.py'
Jan 26 16:31:09 compute-0 podman[220261]: 2026-01-26 16:31:09.903776795 +0000 UTC m=+0.055903561 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:31:09 compute-0 sudo[220304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:31:10 compute-0 python3.9[220308]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 16:31:10 compute-0 sudo[220304]: pam_unix(sudo:session): session closed for user root
Jan 26 16:31:10 compute-0 sudo[220461]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdpkrwbswkfvyhxjatwafakihmfwryfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445070.342817-415-232136977727894/AnsiballZ_file.py'
Jan 26 16:31:10 compute-0 sudo[220461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:31:10 compute-0 python3.9[220463]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_ipmi.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:31:10 compute-0 sudo[220461]: pam_unix(sudo:session): session closed for user root
Jan 26 16:31:11 compute-0 sudo[220537]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlqrowotoiwobsznwqhwundnmkuasgyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445070.342817-415-232136977727894/AnsiballZ_stat.py'
Jan 26 16:31:11 compute-0 sudo[220537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:31:11 compute-0 python3.9[220539]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_ipmi_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 16:31:11 compute-0 sudo[220537]: pam_unix(sudo:session): session closed for user root
Jan 26 16:31:12 compute-0 sudo[220688]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kewawentlhvtbtaqlmjpctsrazrztpou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445071.536879-415-61646197548135/AnsiballZ_copy.py'
Jan 26 16:31:12 compute-0 sudo[220688]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:31:12 compute-0 python3.9[220690]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769445071.536879-415-61646197548135/source dest=/etc/systemd/system/edpm_ceilometer_agent_ipmi.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:31:12 compute-0 sudo[220688]: pam_unix(sudo:session): session closed for user root
Jan 26 16:31:12 compute-0 sudo[220764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krikgbtgeotycezoochevsqrlvbxkquu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445071.536879-415-61646197548135/AnsiballZ_systemd.py'
Jan 26 16:31:12 compute-0 sudo[220764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:31:13 compute-0 python3.9[220766]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 26 16:31:13 compute-0 systemd[1]: Reloading.
Jan 26 16:31:13 compute-0 systemd-rc-local-generator[220790]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 16:31:13 compute-0 systemd-sysv-generator[220795]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 16:31:13 compute-0 sudo[220764]: pam_unix(sudo:session): session closed for user root
Jan 26 16:31:13 compute-0 sudo[220875]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-osnrfmjcxihcxcvytavmiyglpdufmykv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445071.536879-415-61646197548135/AnsiballZ_systemd.py'
Jan 26 16:31:13 compute-0 sudo[220875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:31:14 compute-0 python3.9[220877]: ansible-systemd Invoked with state=restarted name=edpm_ceilometer_agent_ipmi.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 16:31:14 compute-0 systemd[1]: Reloading.
Jan 26 16:31:14 compute-0 systemd-rc-local-generator[220910]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 16:31:14 compute-0 systemd-sysv-generator[220913]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 16:31:14 compute-0 systemd[1]: Starting ceilometer_agent_ipmi container...
Jan 26 16:31:14 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:31:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d82489844db108fd9a1d3c0e82d1a18c3077297405e35ee18d1ea41653ea4c67/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Jan 26 16:31:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d82489844db108fd9a1d3c0e82d1a18c3077297405e35ee18d1ea41653ea4c67/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Jan 26 16:31:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d82489844db108fd9a1d3c0e82d1a18c3077297405e35ee18d1ea41653ea4c67/merged/var/lib/kolla/config_files/src supports timestamps until 2038 (0x7fffffff)
Jan 26 16:31:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d82489844db108fd9a1d3c0e82d1a18c3077297405e35ee18d1ea41653ea4c67/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Jan 26 16:31:14 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990.
Jan 26 16:31:14 compute-0 podman[220917]: 2026-01-26 16:31:14.870441465 +0000 UTC m=+0.158189423 container init 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202)
Jan 26 16:31:14 compute-0 ceilometer_agent_ipmi[220933]: + sudo -E kolla_set_configs
Jan 26 16:31:14 compute-0 podman[220917]: 2026-01-26 16:31:14.899037703 +0000 UTC m=+0.186785621 container start 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:31:14 compute-0 podman[220917]: ceilometer_agent_ipmi
Jan 26 16:31:14 compute-0 sudo[220958]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Jan 26 16:31:14 compute-0 sudo[220958]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Jan 26 16:31:14 compute-0 sudo[220958]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Jan 26 16:31:14 compute-0 systemd[1]: Started ceilometer_agent_ipmi container.
Jan 26 16:31:14 compute-0 podman[220959]: 2026-01-26 16:31:14.962639602 +0000 UTC m=+0.052061076 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=1, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 26 16:31:14 compute-0 systemd[1]: 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990-779a1b17fe2d351.service: Main process exited, code=exited, status=1/FAILURE
Jan 26 16:31:14 compute-0 systemd[1]: 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990-779a1b17fe2d351.service: Failed with result 'exit-code'.
Jan 26 16:31:14 compute-0 ceilometer_agent_ipmi[220933]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 26 16:31:14 compute-0 ceilometer_agent_ipmi[220933]: INFO:__main__:Validating config file
Jan 26 16:31:14 compute-0 ceilometer_agent_ipmi[220933]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 26 16:31:14 compute-0 ceilometer_agent_ipmi[220933]: INFO:__main__:Copying service configuration files
Jan 26 16:31:14 compute-0 ceilometer_agent_ipmi[220933]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Jan 26 16:31:14 compute-0 ceilometer_agent_ipmi[220933]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Jan 26 16:31:14 compute-0 ceilometer_agent_ipmi[220933]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Jan 26 16:31:14 compute-0 ceilometer_agent_ipmi[220933]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Jan 26 16:31:14 compute-0 ceilometer_agent_ipmi[220933]: INFO:__main__:Copying /var/lib/kolla/config_files/src/polling.yaml to /etc/ceilometer/polling.yaml
Jan 26 16:31:14 compute-0 ceilometer_agent_ipmi[220933]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Jan 26 16:31:14 compute-0 ceilometer_agent_ipmi[220933]: INFO:__main__:Copying /var/lib/kolla/config_files/src/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Jan 26 16:31:14 compute-0 ceilometer_agent_ipmi[220933]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Jan 26 16:31:14 compute-0 ceilometer_agent_ipmi[220933]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Jan 26 16:31:14 compute-0 ceilometer_agent_ipmi[220933]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Jan 26 16:31:14 compute-0 ceilometer_agent_ipmi[220933]: INFO:__main__:Writing out command to execute
Jan 26 16:31:14 compute-0 sudo[220875]: pam_unix(sudo:session): session closed for user root
Jan 26 16:31:14 compute-0 sudo[220958]: pam_unix(sudo:session): session closed for user root
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: ++ cat /run_command
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: + ARGS=
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: + sudo kolla_copy_cacerts
Jan 26 16:31:15 compute-0 podman[220930]: 2026-01-26 16:31:15.005098837 +0000 UTC m=+0.196259239 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 26 16:31:15 compute-0 sudo[220985]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Jan 26 16:31:15 compute-0 sudo[220985]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Jan 26 16:31:15 compute-0 sudo[220985]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Jan 26 16:31:15 compute-0 sudo[220985]: pam_unix(sudo:session): session closed for user root
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: + [[ ! -n '' ]]
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: + . kolla_extend_start
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'\'''
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: + umask 0022
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: + exec /usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.943 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.943 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.943 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.943 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.944 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.944 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.944 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.944 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.944 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.944 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.944 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.944 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.945 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.945 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.945 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.945 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.945 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.945 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.945 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.946 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.946 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.946 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.946 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.946 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.946 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.946 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.946 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.947 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.947 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.947 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.947 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.947 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.947 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.947 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.947 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.947 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.947 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.947 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.948 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.948 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.948 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.948 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.948 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.948 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.948 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.948 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.949 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.949 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.949 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.949 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.949 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.949 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.949 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.949 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.949 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.950 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.950 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.950 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.950 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.950 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.950 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.950 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.950 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.951 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.951 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.951 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.951 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.951 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.951 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.951 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.951 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.951 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.952 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.952 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.952 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.952 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.952 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.952 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.952 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.952 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.952 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.953 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.953 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.953 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.953 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.953 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.953 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.953 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.953 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.953 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.953 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.954 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.954 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.954 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.954 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.954 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.954 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.954 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.954 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.954 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.955 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.955 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.955 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.955 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.955 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.955 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.955 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.956 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.956 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.956 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.956 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.956 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.956 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.956 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.956 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.956 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.956 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.957 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.957 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.957 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.957 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.957 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.957 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.957 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.957 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.958 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.958 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.958 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.958 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.958 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.958 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.958 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.958 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.959 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.959 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.959 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.959 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.959 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.959 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.959 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.959 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.959 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.960 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.960 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.960 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.960 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.960 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.960 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.960 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.960 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.961 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.961 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.961 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.961 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.961 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.961 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.961 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.961 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.961 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.962 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.962 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.962 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.962 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.962 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.962 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.981 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.983 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Jan 26 16:31:15 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:15.984 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Jan 26 16:31:16 compute-0 python3.9[221137]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.089 12 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'ceilometer-rootwrap', '/etc/ceilometer/rootwrap.conf', 'privsep-helper', '--privsep_context', 'ceilometer.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmp0nf7m4pk/privsep.sock']
Jan 26 16:31:16 compute-0 sudo[221166]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/bin/ceilometer-rootwrap /etc/ceilometer/rootwrap.conf privsep-helper --privsep_context ceilometer.privsep.sys_admin_pctxt --privsep_sock_path /tmp/tmp0nf7m4pk/privsep.sock
Jan 26 16:31:16 compute-0 sudo[221166]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Jan 26 16:31:16 compute-0 sudo[221166]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Jan 26 16:31:16 compute-0 sudo[221166]: pam_unix(sudo:session): session closed for user root
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.778 12 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.779 12 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp0nf7m4pk/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.652 19 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.657 19 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.659 19 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.660 19 INFO oslo.privsep.daemon [-] privsep daemon running as pid 19
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.881 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.current: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.881 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.fan: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.882 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.airflow: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.882 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cpu_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.882 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cups: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.882 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.io_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.883 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.mem_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.883 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.outlet_temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.883 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.power: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.883 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.883 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.temperature: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.883 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.voltage: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.883 12 WARNING ceilometer.polling.manager [-] No valid pollsters can be loaded from ['ipmi'] namespaces
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.886 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.886 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.886 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.886 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.887 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.887 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.887 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.887 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.887 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.887 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.887 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.887 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange               = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.887 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.887 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.888 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.888 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.888 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.888 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.888 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.888 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.888 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.889 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.889 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.889 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.889 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.889 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.889 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.889 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.890 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.890 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.890 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.890 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.890 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.890 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.890 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.890 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.890 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.890 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.890 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.890 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.891 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.891 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.891 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.891 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.891 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.891 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.891 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.891 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.891 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.891 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.892 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.892 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.892 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.892 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.892 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.892 12 DEBUG cotyledon.oslo_config_glue [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.892 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.892 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.892 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.892 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.892 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.893 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.893 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.893 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.893 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.893 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.893 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.893 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.893 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.893 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.893 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.893 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.894 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.894 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.894 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.894 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.894 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.894 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.894 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.894 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.894 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.894 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.895 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.895 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.895 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.895 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.895 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.895 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.895 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.895 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.895 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.895 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.895 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.896 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.896 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.896 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.896 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.896 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.896 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.896 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.896 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.896 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.897 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.897 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.897 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.897 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.897 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.897 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.897 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.897 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.898 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.898 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.898 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.898 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.898 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.898 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.898 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.898 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.899 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.899 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.899 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.899 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.899 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.899 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.899 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.899 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.899 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.900 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.900 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.900 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.900 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.900 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.900 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.900 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.900 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.900 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.900 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.901 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.901 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.901 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.901 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.901 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.901 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.901 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.901 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.901 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.901 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.901 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.901 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.902 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.902 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.902 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.902 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.902 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.902 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.902 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.902 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.902 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.902 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.903 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.903 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.903 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.903 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.903 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.903 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.903 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.903 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.903 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.903 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.903 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.904 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.904 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.904 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.904 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.904 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.904 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.904 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.904 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.904 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.905 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.905 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.905 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.905 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.905 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.905 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.905 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.905 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.905 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.906 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.906 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.906 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.906 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.906 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.906 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.906 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.906 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.906 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.906 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.907 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.907 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.907 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.907 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.907 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
Jan 26 16:31:16 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:16.909 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['hardware.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
Jan 26 16:31:17 compute-0 sudo[221300]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxklxufyanpkiqpxdftfizwxucsgtqdr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445076.7516048-460-184037909433367/AnsiballZ_stat.py'
Jan 26 16:31:17 compute-0 sudo[221300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:31:17 compute-0 python3.9[221302]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:31:17 compute-0 sudo[221300]: pam_unix(sudo:session): session closed for user root
Jan 26 16:31:18 compute-0 sudo[221425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igzztvofedlxtrtpynrqywmzfqgxxvkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445076.7516048-460-184037909433367/AnsiballZ_copy.py'
Jan 26 16:31:18 compute-0 sudo[221425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:31:18 compute-0 python3.9[221427]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769445076.7516048-460-184037909433367/.source.yaml _original_basename=.w0bsto81 follow=False checksum=174a178d8e89a2969057d975e269d8b1cd7b5ff6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:31:18 compute-0 sudo[221425]: pam_unix(sudo:session): session closed for user root
Jan 26 16:31:19 compute-0 sudo[221577]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdrzmnmaqooxpmwywlczvmvahrvdvota ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445078.7853546-477-213423421372818/AnsiballZ_file.py'
Jan 26 16:31:19 compute-0 sudo[221577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:31:19 compute-0 python3.9[221579]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:31:19 compute-0 sudo[221577]: pam_unix(sudo:session): session closed for user root
Jan 26 16:31:20 compute-0 sudo[221729]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czwndyufvtzdihhartblvuxkbwdqzxre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445079.820065-485-115476166412580/AnsiballZ_file.py'
Jan 26 16:31:20 compute-0 sudo[221729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:31:20 compute-0 python3.9[221731]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 26 16:31:20 compute-0 sudo[221729]: pam_unix(sudo:session): session closed for user root
Jan 26 16:31:21 compute-0 podman[221732]: 2026-01-26 16:31:21.54725781 +0000 UTC m=+0.063203111 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 26 16:31:22 compute-0 python3.9[221906]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/kepler state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:31:25 compute-0 nova_compute[185389]: 2026-01-26 16:31:25.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:31:25 compute-0 nova_compute[185389]: 2026-01-26 16:31:25.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:31:26 compute-0 nova_compute[185389]: 2026-01-26 16:31:26.716 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:31:26 compute-0 sudo[222327]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-debuouzqpavfclkthxncfijdgmckdfte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445086.3920093-519-133282401254697/AnsiballZ_container_config_data.py'
Jan 26 16:31:26 compute-0 sudo[222327]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:31:26 compute-0 nova_compute[185389]: 2026-01-26 16:31:26.922 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:31:27 compute-0 python3.9[222329]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/kepler config_pattern=*.json debug=False
Jan 26 16:31:27 compute-0 sudo[222327]: pam_unix(sudo:session): session closed for user root
Jan 26 16:31:27 compute-0 nova_compute[185389]: 2026-01-26 16:31:27.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:31:27 compute-0 nova_compute[185389]: 2026-01-26 16:31:27.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:31:27 compute-0 nova_compute[185389]: 2026-01-26 16:31:27.720 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:31:27 compute-0 nova_compute[185389]: 2026-01-26 16:31:27.720 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 16:31:27 compute-0 nova_compute[185389]: 2026-01-26 16:31:27.736 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 16:31:27 compute-0 nova_compute[185389]: 2026-01-26 16:31:27.736 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:31:27 compute-0 nova_compute[185389]: 2026-01-26 16:31:27.737 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:31:27 compute-0 nova_compute[185389]: 2026-01-26 16:31:27.772 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:31:27 compute-0 nova_compute[185389]: 2026-01-26 16:31:27.772 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:31:27 compute-0 nova_compute[185389]: 2026-01-26 16:31:27.773 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:31:27 compute-0 nova_compute[185389]: 2026-01-26 16:31:27.773 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:31:27 compute-0 sudo[222479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jplwlofdnxsskqjbjtwdniccozjdawhu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445087.5270734-530-175847056118757/AnsiballZ_container_config_hash.py'
Jan 26 16:31:27 compute-0 sudo[222479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:31:28 compute-0 nova_compute[185389]: 2026-01-26 16:31:28.079 185393 WARNING nova.virt.libvirt.driver [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:31:28 compute-0 nova_compute[185389]: 2026-01-26 16:31:28.080 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5777MB free_disk=72.4778060913086GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:31:28 compute-0 nova_compute[185389]: 2026-01-26 16:31:28.080 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:31:28 compute-0 nova_compute[185389]: 2026-01-26 16:31:28.080 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:31:28 compute-0 python3.9[222481]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 26 16:31:28 compute-0 sudo[222479]: pam_unix(sudo:session): session closed for user root
Jan 26 16:31:28 compute-0 nova_compute[185389]: 2026-01-26 16:31:28.189 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:31:28 compute-0 nova_compute[185389]: 2026-01-26 16:31:28.189 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:31:28 compute-0 nova_compute[185389]: 2026-01-26 16:31:28.216 185393 DEBUG nova.compute.provider_tree [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:31:28 compute-0 nova_compute[185389]: 2026-01-26 16:31:28.250 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:31:28 compute-0 nova_compute[185389]: 2026-01-26 16:31:28.252 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:31:28 compute-0 nova_compute[185389]: 2026-01-26 16:31:28.252 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.172s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:31:28 compute-0 sudo[222649]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndlnaxaxjccysaiocuhysgdbxuksfcrf ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769445088.4552321-540-36641667648520/AnsiballZ_edpm_container_manage.py'
Jan 26 16:31:28 compute-0 sudo[222649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:31:28 compute-0 podman[222605]: 2026-01-26 16:31:28.849332582 +0000 UTC m=+0.097078731 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, container_name=openstack_network_exporter, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, vcs-type=git, release=1755695350, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., version=9.6)
Jan 26 16:31:29 compute-0 python3[222653]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/kepler config_id=kepler config_overrides={} config_patterns=*.json containers=['kepler'] log_base_path=/var/log/containers/stdouts debug=False
Jan 26 16:31:29 compute-0 nova_compute[185389]: 2026-01-26 16:31:29.234 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:31:29 compute-0 nova_compute[185389]: 2026-01-26 16:31:29.234 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:31:29 compute-0 podman[222690]: 2026-01-26 16:31:29.322477251 +0000 UTC m=+0.052098667 container create d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, name=ubi9, release=1214.1726694543, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=kepler, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, vendor=Red Hat, Inc., version=9.4, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9)
Jan 26 16:31:29 compute-0 podman[222690]: 2026-01-26 16:31:29.290774929 +0000 UTC m=+0.020396325 image pull ed61e3ea3188391c18595d8ceada2a5a01f0ece915c62fde355798735b5208d7 quay.io/sustainable_computing_io/kepler:release-0.7.12
Jan 26 16:31:29 compute-0 python3[222653]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name kepler --conmon-pidfile /run/kepler.pid --env ENABLE_GPU=true --env ENABLE_PROCESS_METRICS=true --env EXPOSE_CONTAINER_METRICS=true --env EXPOSE_ESTIMATED_IDLE_POWER_METRICS=false --env EXPOSE_VM_METRICS=true --env LIBVIRT_METADATA_URI=http://openstack.org/xmlns/libvirt/nova/1.1 --healthcheck-command /openstack/healthcheck kepler --label config_id=kepler --label container_name=kepler --label managed_by=edpm_ansible --label config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 8888:8888 --volume /lib/modules:/lib/modules:ro --volume /run/libvirt:/run/libvirt:shared,ro --volume /sys:/sys --volume /proc:/proc --volume /var/lib/openstack/healthchecks/kepler:/openstack:ro,z quay.io/sustainable_computing_io/kepler:release-0.7.12 -v=2
Jan 26 16:31:29 compute-0 sudo[222649]: pam_unix(sudo:session): session closed for user root
Jan 26 16:31:29 compute-0 podman[201244]: time="2026-01-26T16:31:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 16:31:29 compute-0 podman[201244]: @ - - [26/Jan/2026:16:31:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27270 "" "Go-http-client/1.1"
Jan 26 16:31:29 compute-0 podman[201244]: @ - - [26/Jan/2026:16:31:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3437 "" "Go-http-client/1.1"
Jan 26 16:31:30 compute-0 sudo[222879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-novevhreuayczovyraymymyzznygmutj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445089.7638319-548-226956405172002/AnsiballZ_stat.py'
Jan 26 16:31:30 compute-0 sudo[222879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:31:30 compute-0 python3.9[222881]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 16:31:30 compute-0 sudo[222879]: pam_unix(sudo:session): session closed for user root
Jan 26 16:31:30 compute-0 nova_compute[185389]: 2026-01-26 16:31:30.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:31:31 compute-0 sudo[223033]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtslbmjyktzqvoghncyirocowogqwalq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445090.76704-557-126794463129249/AnsiballZ_file.py'
Jan 26 16:31:31 compute-0 sudo[223033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:31:31 compute-0 python3.9[223035]: ansible-file Invoked with path=/etc/systemd/system/edpm_kepler.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:31:31 compute-0 sudo[223033]: pam_unix(sudo:session): session closed for user root
Jan 26 16:31:31 compute-0 openstack_network_exporter[204387]: ERROR   16:31:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 16:31:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:31:31 compute-0 openstack_network_exporter[204387]: ERROR   16:31:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 16:31:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:31:31 compute-0 sudo[223109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulytqrvfyniqpvcmyxkpvmkwutwpxive ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445090.76704-557-126794463129249/AnsiballZ_stat.py'
Jan 26 16:31:31 compute-0 sudo[223109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:31:31 compute-0 python3.9[223111]: ansible-stat Invoked with path=/etc/systemd/system/edpm_kepler_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 16:31:31 compute-0 sudo[223109]: pam_unix(sudo:session): session closed for user root
Jan 26 16:31:32 compute-0 podman[223172]: 2026-01-26 16:31:32.164510073 +0000 UTC m=+0.053170507 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260120, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4)
Jan 26 16:31:32 compute-0 sudo[223280]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqolmfvzzuxccfkrdwuchcuxdozjyuqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445091.8890343-557-158657968126935/AnsiballZ_copy.py'
Jan 26 16:31:32 compute-0 sudo[223280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:31:32 compute-0 python3.9[223282]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769445091.8890343-557-158657968126935/source dest=/etc/systemd/system/edpm_kepler.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:31:32 compute-0 sudo[223280]: pam_unix(sudo:session): session closed for user root
Jan 26 16:31:32 compute-0 sudo[223356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwzvokyqjoeqtxihavbwzggtwbyqgnck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445091.8890343-557-158657968126935/AnsiballZ_systemd.py'
Jan 26 16:31:32 compute-0 sudo[223356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:31:33 compute-0 python3.9[223358]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 26 16:31:33 compute-0 systemd[1]: Reloading.
Jan 26 16:31:33 compute-0 systemd-rc-local-generator[223383]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 16:31:33 compute-0 systemd-sysv-generator[223387]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 16:31:33 compute-0 sudo[223356]: pam_unix(sudo:session): session closed for user root
Jan 26 16:31:33 compute-0 sudo[223467]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evsibadpjoeixftgamrultedjikbmjry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445091.8890343-557-158657968126935/AnsiballZ_systemd.py'
Jan 26 16:31:33 compute-0 sudo[223467]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:31:33 compute-0 python3.9[223469]: ansible-systemd Invoked with state=restarted name=edpm_kepler.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 16:31:34 compute-0 systemd[1]: Reloading.
Jan 26 16:31:34 compute-0 systemd-sysv-generator[223502]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 16:31:34 compute-0 systemd-rc-local-generator[223497]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 16:31:35 compute-0 systemd[1]: Starting kepler container...
Jan 26 16:31:35 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:31:36 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285.
Jan 26 16:31:36 compute-0 podman[223508]: 2026-01-26 16:31:36.439012321 +0000 UTC m=+0.747518897 container init d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, maintainer=Red Hat, Inc., architecture=x86_64, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, 
io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, com.redhat.component=ubi9-container, config_id=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9)
Jan 26 16:31:36 compute-0 kepler[223523]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Jan 26 16:31:36 compute-0 kepler[223523]: I0126 16:31:36.475448       1 exporter.go:103] Kepler running on version: v0.7.12-dirty
Jan 26 16:31:36 compute-0 kepler[223523]: I0126 16:31:36.475624       1 config.go:293] using gCgroup ID in the BPF program: true
Jan 26 16:31:36 compute-0 kepler[223523]: I0126 16:31:36.475643       1 config.go:295] kernel version: 5.14
Jan 26 16:31:36 compute-0 kepler[223523]: I0126 16:31:36.476548       1 power.go:78] Unable to obtain power, use estimate method
Jan 26 16:31:36 compute-0 kepler[223523]: I0126 16:31:36.476573       1 redfish.go:169] failed to get redfish credential file path
Jan 26 16:31:36 compute-0 kepler[223523]: I0126 16:31:36.477111       1 acpi.go:71] Could not find any ACPI power meter path. Is it a VM?
Jan 26 16:31:36 compute-0 kepler[223523]: I0126 16:31:36.477127       1 power.go:79] using none to obtain power
Jan 26 16:31:36 compute-0 kepler[223523]: E0126 16:31:36.477143       1 accelerator.go:154] [DUMMY] doesn't contain GPU
Jan 26 16:31:36 compute-0 kepler[223523]: E0126 16:31:36.477176       1 exporter.go:154] failed to init GPU accelerators: no devices found
Jan 26 16:31:36 compute-0 kepler[223523]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Jan 26 16:31:36 compute-0 kepler[223523]: I0126 16:31:36.479174       1 exporter.go:84] Number of CPUs: 8
Jan 26 16:31:36 compute-0 podman[223508]: 2026-01-26 16:31:36.480483987 +0000 UTC m=+0.788990503 container start d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=Red Hat, Inc., config_id=kepler, io.buildah.version=1.29.0, vcs-type=git, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, name=ubi9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, version=9.4, build-date=2024-09-18T21:23:30, container_name=kepler, com.redhat.component=ubi9-container, vendor=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Jan 26 16:31:36 compute-0 podman[223508]: kepler
Jan 26 16:31:36 compute-0 systemd[1]: Started kepler container.
Jan 26 16:31:36 compute-0 sudo[223467]: pam_unix(sudo:session): session closed for user root
Jan 26 16:31:36 compute-0 podman[223533]: 2026-01-26 16:31:36.592988287 +0000 UTC m=+0.092996771 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=starting, health_failing_streak=1, health_log=, release-0.7.12=, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, config_id=kepler, vcs-type=git, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, 
build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, distribution-scope=public, io.openshift.expose-services=)
Jan 26 16:31:36 compute-0 systemd[1]: d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285-2368f6ee2adb4833.service: Main process exited, code=exited, status=1/FAILURE
Jan 26 16:31:36 compute-0 systemd[1]: d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285-2368f6ee2adb4833.service: Failed with result 'exit-code'.
Jan 26 16:31:37 compute-0 kepler[223523]: I0126 16:31:37.025137       1 watcher.go:83] Using in cluster k8s config
Jan 26 16:31:37 compute-0 kepler[223523]: I0126 16:31:37.025834       1 watcher.go:90] failed to get config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Jan 26 16:31:37 compute-0 kepler[223523]: E0126 16:31:37.026308       1 manager.go:59] could not run the watcher k8s APIserver watcher was not enabled
Jan 26 16:31:37 compute-0 kepler[223523]: I0126 16:31:37.032552       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_TOTAL Power
Jan 26 16:31:37 compute-0 kepler[223523]: I0126 16:31:37.032904       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms]
Jan 26 16:31:37 compute-0 kepler[223523]: I0126 16:31:37.040776       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_COMPONENTS Power
Jan 26 16:31:37 compute-0 kepler[223523]: I0126 16:31:37.041043       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms bpf_cpu_time_ms bpf_cpu_time_ms   gpu_compute_util]
Jan 26 16:31:37 compute-0 kepler[223523]: I0126 16:31:37.050708       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 26 16:31:37 compute-0 kepler[223523]: I0126 16:31:37.050760       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Jan 26 16:31:37 compute-0 kepler[223523]: I0126 16:31:37.050778       1 node_platform_energy.go:53] Using the Regressor/AbsPower Power Model to estimate Node Platform Power
Jan 26 16:31:37 compute-0 kepler[223523]: I0126 16:31:37.060655       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 26 16:31:37 compute-0 kepler[223523]: I0126 16:31:37.060717       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 26 16:31:37 compute-0 kepler[223523]: I0126 16:31:37.060726       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 26 16:31:37 compute-0 kepler[223523]: I0126 16:31:37.060734       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 26 16:31:37 compute-0 kepler[223523]: I0126 16:31:37.060744       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Jan 26 16:31:37 compute-0 kepler[223523]: I0126 16:31:37.060765       1 node_component_energy.go:57] Using the Regressor/AbsPower Power Model to estimate Node Component Power
Jan 26 16:31:37 compute-0 kepler[223523]: I0126 16:31:37.060894       1 prometheus_collector.go:90] Registered Process Prometheus metrics
Jan 26 16:31:37 compute-0 kepler[223523]: I0126 16:31:37.060988       1 prometheus_collector.go:95] Registered Container Prometheus metrics
Jan 26 16:31:37 compute-0 kepler[223523]: I0126 16:31:37.061027       1 prometheus_collector.go:100] Registered VM Prometheus metrics
Jan 26 16:31:37 compute-0 kepler[223523]: I0126 16:31:37.061055       1 prometheus_collector.go:104] Registered Node Prometheus metrics
Jan 26 16:31:37 compute-0 kepler[223523]: I0126 16:31:37.061192       1 exporter.go:194] starting to listen on 0.0.0.0:8888
Jan 26 16:31:37 compute-0 kepler[223523]: I0126 16:31:37.062089       1 exporter.go:208] Started Kepler in 586.944889ms
Jan 26 16:31:37 compute-0 python3.9[223717]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 26 16:31:38 compute-0 podman[223841]: 2026-01-26 16:31:38.975198618 +0000 UTC m=+0.112819538 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 26 16:31:38 compute-0 sudo[223883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcbpilvfihfeszsewaayrultaguooope ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445098.4232683-602-7843905936939/AnsiballZ_stat.py'
Jan 26 16:31:39 compute-0 sudo[223883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:31:39 compute-0 python3.9[223891]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:31:39 compute-0 sudo[223883]: pam_unix(sudo:session): session closed for user root
Jan 26 16:31:40 compute-0 sudo[224029]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugmeihmodwsgyaehnvtylfogoubwfitn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445098.4232683-602-7843905936939/AnsiballZ_copy.py'
Jan 26 16:31:40 compute-0 sudo[224029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:31:40 compute-0 podman[223988]: 2026-01-26 16:31:40.227513618 +0000 UTC m=+0.108640765 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 26 16:31:40 compute-0 python3.9[224035]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769445098.4232683-602-7843905936939/.source.yaml _original_basename=.gidvh4ph follow=False checksum=890735d31eaa744bfe12b9ebf8e5f893dbfd2aec backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:31:40 compute-0 sudo[224029]: pam_unix(sudo:session): session closed for user root
Jan 26 16:31:41 compute-0 sudo[224185]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojxxdhpjuqdrbuwbblzlnojpawitpcmm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445100.7930515-617-45138320784520/AnsiballZ_systemd.py'
Jan 26 16:31:41 compute-0 sudo[224185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:31:41 compute-0 python3.9[224187]: ansible-ansible.builtin.systemd Invoked with name=edpm_ceilometer_agent_ipmi.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 26 16:31:41 compute-0 systemd[1]: Stopping ceilometer_agent_ipmi container...
Jan 26 16:31:41 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:41.741 2 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process
Jan 26 16:31:41 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:41.843 2 DEBUG cotyledon._service_manager [-] Killing services with signal SIGTERM _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:304
Jan 26 16:31:41 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:41.844 2 DEBUG cotyledon._service_manager [-] Waiting services to terminate _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:308
Jan 26 16:31:41 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:41.844 12 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [12]
Jan 26 16:31:41 compute-0 ceilometer_agent_ipmi[220933]: 2026-01-26 16:31:41.855 2 DEBUG cotyledon._service_manager [-] Shutdown finish _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:320
Jan 26 16:31:42 compute-0 systemd[1]: libpod-9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990.scope: Deactivated successfully.
Jan 26 16:31:42 compute-0 systemd[1]: libpod-9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990.scope: Consumed 2.341s CPU time.
Jan 26 16:31:42 compute-0 podman[224191]: 2026-01-26 16:31:42.060432693 +0000 UTC m=+0.371162202 container stop 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Jan 26 16:31:42 compute-0 podman[224191]: 2026-01-26 16:31:42.087931881 +0000 UTC m=+0.398661390 container died 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, 
managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 26 16:31:42 compute-0 systemd[1]: 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990-779a1b17fe2d351.timer: Deactivated successfully.
Jan 26 16:31:42 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990.
Jan 26 16:31:42 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990-userdata-shm.mount: Deactivated successfully.
Jan 26 16:31:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-d82489844db108fd9a1d3c0e82d1a18c3077297405e35ee18d1ea41653ea4c67-merged.mount: Deactivated successfully.
Jan 26 16:31:42 compute-0 podman[224191]: 2026-01-26 16:31:42.144772937 +0000 UTC m=+0.455502446 container cleanup 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, 
managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:31:42 compute-0 podman[224191]: ceilometer_agent_ipmi
Jan 26 16:31:42 compute-0 podman[224217]: ceilometer_agent_ipmi
Jan 26 16:31:42 compute-0 systemd[1]: edpm_ceilometer_agent_ipmi.service: Deactivated successfully.
Jan 26 16:31:42 compute-0 systemd[1]: Stopped ceilometer_agent_ipmi container.
Jan 26 16:31:42 compute-0 systemd[1]: Starting ceilometer_agent_ipmi container...
Jan 26 16:31:42 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:31:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d82489844db108fd9a1d3c0e82d1a18c3077297405e35ee18d1ea41653ea4c67/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Jan 26 16:31:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d82489844db108fd9a1d3c0e82d1a18c3077297405e35ee18d1ea41653ea4c67/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Jan 26 16:31:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d82489844db108fd9a1d3c0e82d1a18c3077297405e35ee18d1ea41653ea4c67/merged/var/lib/kolla/config_files/src supports timestamps until 2038 (0x7fffffff)
Jan 26 16:31:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d82489844db108fd9a1d3c0e82d1a18c3077297405e35ee18d1ea41653ea4c67/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Jan 26 16:31:42 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990.
Jan 26 16:31:42 compute-0 podman[224227]: 2026-01-26 16:31:42.398188187 +0000 UTC m=+0.149597989 container init 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi)
Jan 26 16:31:42 compute-0 ceilometer_agent_ipmi[224242]: + sudo -E kolla_set_configs
Jan 26 16:31:42 compute-0 podman[224227]: 2026-01-26 16:31:42.432086029 +0000 UTC m=+0.183495841 container start 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, 
org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 16:31:42 compute-0 podman[224227]: ceilometer_agent_ipmi
Jan 26 16:31:42 compute-0 systemd[1]: Started ceilometer_agent_ipmi container.
Jan 26 16:31:42 compute-0 sudo[224248]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Jan 26 16:31:42 compute-0 sudo[224248]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Jan 26 16:31:42 compute-0 sudo[224248]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Jan 26 16:31:42 compute-0 sudo[224185]: pam_unix(sudo:session): session closed for user root
Jan 26 16:31:42 compute-0 ceilometer_agent_ipmi[224242]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 26 16:31:42 compute-0 ceilometer_agent_ipmi[224242]: INFO:__main__:Validating config file
Jan 26 16:31:42 compute-0 ceilometer_agent_ipmi[224242]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 26 16:31:42 compute-0 ceilometer_agent_ipmi[224242]: INFO:__main__:Copying service configuration files
Jan 26 16:31:42 compute-0 ceilometer_agent_ipmi[224242]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Jan 26 16:31:42 compute-0 ceilometer_agent_ipmi[224242]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Jan 26 16:31:42 compute-0 ceilometer_agent_ipmi[224242]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Jan 26 16:31:42 compute-0 ceilometer_agent_ipmi[224242]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Jan 26 16:31:42 compute-0 ceilometer_agent_ipmi[224242]: INFO:__main__:Copying /var/lib/kolla/config_files/src/polling.yaml to /etc/ceilometer/polling.yaml
Jan 26 16:31:42 compute-0 ceilometer_agent_ipmi[224242]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Jan 26 16:31:42 compute-0 ceilometer_agent_ipmi[224242]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Jan 26 16:31:42 compute-0 ceilometer_agent_ipmi[224242]: INFO:__main__:Copying /var/lib/kolla/config_files/src/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Jan 26 16:31:42 compute-0 ceilometer_agent_ipmi[224242]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Jan 26 16:31:42 compute-0 ceilometer_agent_ipmi[224242]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Jan 26 16:31:42 compute-0 ceilometer_agent_ipmi[224242]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Jan 26 16:31:42 compute-0 ceilometer_agent_ipmi[224242]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Jan 26 16:31:42 compute-0 ceilometer_agent_ipmi[224242]: INFO:__main__:Writing out command to execute
Jan 26 16:31:42 compute-0 sudo[224248]: pam_unix(sudo:session): session closed for user root
Jan 26 16:31:42 compute-0 ceilometer_agent_ipmi[224242]: ++ cat /run_command
Jan 26 16:31:42 compute-0 ceilometer_agent_ipmi[224242]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Jan 26 16:31:42 compute-0 ceilometer_agent_ipmi[224242]: + ARGS=
Jan 26 16:31:42 compute-0 ceilometer_agent_ipmi[224242]: + sudo kolla_copy_cacerts
Jan 26 16:31:42 compute-0 podman[224249]: 2026-01-26 16:31:42.537355761 +0000 UTC m=+0.086304897 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=1, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:31:42 compute-0 sudo[224280]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Jan 26 16:31:42 compute-0 systemd[1]: 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990-216c0ba43bd15fde.service: Main process exited, code=exited, status=1/FAILURE
Jan 26 16:31:42 compute-0 systemd[1]: 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990-216c0ba43bd15fde.service: Failed with result 'exit-code'.
Jan 26 16:31:42 compute-0 sudo[224280]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Jan 26 16:31:42 compute-0 sudo[224280]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Jan 26 16:31:42 compute-0 sudo[224280]: pam_unix(sudo:session): session closed for user root
Jan 26 16:31:42 compute-0 ceilometer_agent_ipmi[224242]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Jan 26 16:31:42 compute-0 ceilometer_agent_ipmi[224242]: + [[ ! -n '' ]]
Jan 26 16:31:42 compute-0 ceilometer_agent_ipmi[224242]: + . kolla_extend_start
Jan 26 16:31:42 compute-0 ceilometer_agent_ipmi[224242]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'\'''
Jan 26 16:31:42 compute-0 ceilometer_agent_ipmi[224242]: + umask 0022
Jan 26 16:31:42 compute-0 ceilometer_agent_ipmi[224242]: + exec /usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout
Jan 26 16:31:43 compute-0 sudo[224423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgpqinhhzmyawzxislqewtxgxizcrvib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445102.7113187-625-239487496770587/AnsiballZ_systemd.py'
Jan 26 16:31:43 compute-0 sudo[224423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.429 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.430 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.430 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.430 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.430 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.430 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.430 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.430 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.430 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.431 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.431 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.431 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.432 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.432 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.432 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.433 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.433 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.433 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.433 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.433 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.433 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.434 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.434 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.434 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.434 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.434 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.434 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.434 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.434 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.435 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.435 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.435 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.435 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.435 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.435 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.435 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.435 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.435 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.435 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.435 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.436 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.436 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.436 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.436 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.436 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.436 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.436 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.436 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.436 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.437 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.437 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.437 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.437 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.437 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.437 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.437 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.438 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.438 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.438 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.438 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.438 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.438 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.438 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.438 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.438 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.438 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.439 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.439 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.439 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.439 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.439 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.439 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.439 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.439 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.439 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.440 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.440 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.440 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.440 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.440 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.440 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.440 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.440 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.440 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.440 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.440 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.441 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.441 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.441 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.441 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.441 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.441 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.441 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.441 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.441 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.441 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.442 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.442 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.442 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.442 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.442 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.442 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.442 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.442 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.442 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.443 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.443 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.443 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.443 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.443 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.443 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.443 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.443 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.443 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.444 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.444 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.444 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.444 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.444 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.444 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.444 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.444 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.445 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.445 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.445 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.445 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.445 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.445 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.445 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.445 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.445 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.446 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.446 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.446 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.446 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.446 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.446 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.446 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.446 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.446 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.447 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.447 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.447 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.447 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.447 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.447 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.447 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.447 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.447 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.448 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.448 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.448 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.448 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.448 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.448 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.448 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.448 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.448 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.448 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.449 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.449 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.449 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.449 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.449 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.449 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.468 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Jan 26 16:31:43 compute-0 python3.9[224425]: ansible-ansible.builtin.systemd Invoked with name=edpm_kepler.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.470 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.471 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Jan 26 16:31:43 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:43.485 12 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'ceilometer-rootwrap', '/etc/ceilometer/rootwrap.conf', 'privsep-helper', '--privsep_context', 'ceilometer.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpltnkf_16/privsep.sock']
Jan 26 16:31:43 compute-0 sudo[224431]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/bin/ceilometer-rootwrap /etc/ceilometer/rootwrap.conf privsep-helper --privsep_context ceilometer.privsep.sys_admin_pctxt --privsep_sock_path /tmp/tmpltnkf_16/privsep.sock
Jan 26 16:31:43 compute-0 sudo[224431]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Jan 26 16:31:43 compute-0 systemd[1]: Stopping kepler container...
Jan 26 16:31:43 compute-0 sudo[224431]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Jan 26 16:31:43 compute-0 kepler[223523]: I0126 16:31:43.611354       1 exporter.go:218] Received shutdown signal
Jan 26 16:31:43 compute-0 kepler[223523]: I0126 16:31:43.612615       1 exporter.go:226] Exiting...
Jan 26 16:31:43 compute-0 systemd[1]: libpod-d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285.scope: Deactivated successfully.
Jan 26 16:31:43 compute-0 systemd[1]: libpod-d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285.scope: Consumed 1.002s CPU time.
Jan 26 16:31:43 compute-0 podman[224434]: 2026-01-26 16:31:43.790201376 +0000 UTC m=+0.244318225 container died d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., architecture=x86_64, container_name=kepler, distribution-scope=public, io.buildah.version=1.29.0, managed_by=edpm_ansible, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, build-date=2024-09-18T21:23:30, config_id=kepler, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, name=ubi9, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., version=9.4, maintainer=Red Hat, Inc., release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 26 16:31:43 compute-0 systemd[1]: d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285-2368f6ee2adb4833.timer: Deactivated successfully.
Jan 26 16:31:43 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285.
Jan 26 16:31:43 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285-userdata-shm.mount: Deactivated successfully.
Jan 26 16:31:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-177c547509170b484e3412aeda98dd878c4799e6d17d45696d08e4df3d5684f1-merged.mount: Deactivated successfully.
Jan 26 16:31:43 compute-0 podman[224434]: 2026-01-26 16:31:43.827628463 +0000 UTC m=+0.281745332 container cleanup d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, name=ubi9, distribution-scope=public, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, config_id=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., release=1214.1726694543, managed_by=edpm_ansible, vcs-type=git, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, io.openshift.tags=base rhel9)
Jan 26 16:31:43 compute-0 podman[224434]: kepler
Jan 26 16:31:43 compute-0 podman[224463]: kepler
Jan 26 16:31:43 compute-0 systemd[1]: edpm_kepler.service: Deactivated successfully.
Jan 26 16:31:43 compute-0 systemd[1]: Stopped kepler container.
Jan 26 16:31:43 compute-0 systemd[1]: Starting kepler container...
Jan 26 16:31:43 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:31:44 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285.
Jan 26 16:31:44 compute-0 podman[224477]: 2026-01-26 16:31:44.009740764 +0000 UTC m=+0.104093321 container init d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, release=1214.1726694543, architecture=x86_64, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, vendor=Red Hat, Inc., config_id=kepler, managed_by=edpm_ansible, io.openshift.tags=base rhel9, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, 
com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc.)
Jan 26 16:31:44 compute-0 podman[224477]: 2026-01-26 16:31:44.031082645 +0000 UTC m=+0.125435152 container start d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, config_id=kepler, managed_by=edpm_ansible, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, distribution-scope=public, vcs-type=git, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, io.openshift.expose-services=, name=ubi9, io.openshift.tags=base rhel9, architecture=x86_64, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., vendor=Red Hat, Inc.)
Jan 26 16:31:44 compute-0 podman[224477]: kepler
Jan 26 16:31:44 compute-0 kepler[224493]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Jan 26 16:31:44 compute-0 systemd[1]: Started kepler container.
Jan 26 16:31:44 compute-0 kepler[224493]: I0126 16:31:44.050166       1 exporter.go:103] Kepler running on version: v0.7.12-dirty
Jan 26 16:31:44 compute-0 kepler[224493]: I0126 16:31:44.050380       1 config.go:293] using gCgroup ID in the BPF program: true
Jan 26 16:31:44 compute-0 kepler[224493]: I0126 16:31:44.050410       1 config.go:295] kernel version: 5.14
Jan 26 16:31:44 compute-0 kepler[224493]: I0126 16:31:44.051081       1 power.go:78] Unable to obtain power, use estimate method
Jan 26 16:31:44 compute-0 kepler[224493]: I0126 16:31:44.051119       1 redfish.go:169] failed to get redfish credential file path
Jan 26 16:31:44 compute-0 kepler[224493]: I0126 16:31:44.051663       1 acpi.go:71] Could not find any ACPI power meter path. Is it a VM?
Jan 26 16:31:44 compute-0 kepler[224493]: I0126 16:31:44.051682       1 power.go:79] using none to obtain power
Jan 26 16:31:44 compute-0 kepler[224493]: E0126 16:31:44.051703       1 accelerator.go:154] [DUMMY] doesn't contain GPU
Jan 26 16:31:44 compute-0 kepler[224493]: E0126 16:31:44.051736       1 exporter.go:154] failed to init GPU accelerators: no devices found
Jan 26 16:31:44 compute-0 kepler[224493]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Jan 26 16:31:44 compute-0 kepler[224493]: I0126 16:31:44.054751       1 exporter.go:84] Number of CPUs: 8
Jan 26 16:31:44 compute-0 sudo[224423]: pam_unix(sudo:session): session closed for user root
Jan 26 16:31:44 compute-0 podman[224504]: 2026-01-26 16:31:44.130829387 +0000 UTC m=+0.080622363 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=starting, health_failing_streak=1, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., version=9.4, config_id=kepler, io.buildah.version=1.29.0, name=ubi9, container_name=kepler, release=1214.1726694543, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, io.openshift.tags=base rhel9, release-0.7.12=, distribution-scope=public)
Jan 26 16:31:44 compute-0 systemd[1]: d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285-72ad5c9a9d33d081.service: Main process exited, code=exited, status=1/FAILURE
Jan 26 16:31:44 compute-0 systemd[1]: d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285-72ad5c9a9d33d081.service: Failed with result 'exit-code'.
Jan 26 16:31:44 compute-0 sudo[224431]: pam_unix(sudo:session): session closed for user root
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.172 12 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.173 12 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpltnkf_16/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.052 19 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.056 19 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.058 19 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.058 19 INFO oslo.privsep.daemon [-] privsep daemon running as pid 19
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.293 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.current: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.294 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.fan: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.295 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.airflow: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.295 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cpu_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.295 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cups: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.295 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.io_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.296 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.mem_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.296 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.outlet_temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.296 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.power: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.296 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.296 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.temperature: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.296 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.voltage: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.297 12 WARNING ceilometer.polling.manager [-] No valid pollsters can be loaded from ['ipmi'] namespaces
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.300 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.300 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.300 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.301 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.301 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.301 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.301 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.301 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.302 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.302 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.302 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.302 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange               = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.302 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.302 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.303 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.303 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.303 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.303 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.303 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.303 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.304 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.304 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.304 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.304 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.304 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.304 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.304 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.304 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.305 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.305 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.305 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.305 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.305 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.305 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.305 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.305 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.306 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.306 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.306 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.306 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.306 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.306 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.306 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.307 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.307 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.307 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.307 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.307 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.307 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.307 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.307 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.308 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.308 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.308 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.308 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.308 12 DEBUG cotyledon.oslo_config_glue [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.308 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.308 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.309 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.309 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.309 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.309 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.309 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.309 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.309 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.310 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.310 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.310 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.310 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.310 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.310 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.310 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.311 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.311 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.311 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.311 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.311 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.311 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.311 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.311 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.312 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.312 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.312 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.312 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.312 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.312 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.312 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.313 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.313 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.313 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.313 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.314 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.314 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.314 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.314 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.314 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.315 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.315 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.315 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.315 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.315 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.316 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.316 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.316 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.316 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.317 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.317 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.317 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.317 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.317 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.317 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.317 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.318 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.318 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.318 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.318 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.318 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.318 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.318 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.319 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.319 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.319 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.319 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.319 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.319 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.319 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.320 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.320 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.320 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.320 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.320 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.320 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.320 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.321 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.321 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.321 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.321 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.321 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.321 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.321 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.321 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.322 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.322 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.322 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.322 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.322 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.322 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.322 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.322 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.323 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.323 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.323 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.323 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.323 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.323 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.323 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.323 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.324 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.324 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.324 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.324 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.324 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.324 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.324 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.324 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.325 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.325 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.325 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.325 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.325 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.325 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.325 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.325 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.326 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.326 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.326 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.326 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.326 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.326 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.326 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.327 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.327 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.327 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.327 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.327 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.327 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.328 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.328 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.328 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.328 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.328 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.328 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.329 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.329 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.329 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.329 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.329 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.329 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.329 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.330 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.330 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.330 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.330 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
Jan 26 16:31:44 compute-0 ceilometer_agent_ipmi[224242]: 2026-01-26 16:31:44.332 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['hardware.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
Jan 26 16:31:44 compute-0 sudo[224682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jztsxeriqvizsrjtzeyklnxofmbfxzqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445104.2677999-633-52134868458006/AnsiballZ_find.py'
Jan 26 16:31:44 compute-0 sudo[224682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:31:44 compute-0 kepler[224493]: I0126 16:31:44.627501       1 watcher.go:83] Using in cluster k8s config
Jan 26 16:31:44 compute-0 kepler[224493]: I0126 16:31:44.627548       1 watcher.go:90] failed to get config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Jan 26 16:31:44 compute-0 kepler[224493]: E0126 16:31:44.627576       1 manager.go:59] could not run the watcher k8s APIserver watcher was not enabled
Jan 26 16:31:44 compute-0 kepler[224493]: I0126 16:31:44.631489       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_TOTAL Power
Jan 26 16:31:44 compute-0 kepler[224493]: I0126 16:31:44.631530       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms]
Jan 26 16:31:44 compute-0 kepler[224493]: I0126 16:31:44.636928       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_COMPONENTS Power
Jan 26 16:31:44 compute-0 kepler[224493]: I0126 16:31:44.637039       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms bpf_cpu_time_ms bpf_cpu_time_ms   gpu_compute_util]
Jan 26 16:31:44 compute-0 kepler[224493]: I0126 16:31:44.646392       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 26 16:31:44 compute-0 kepler[224493]: I0126 16:31:44.646438       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Jan 26 16:31:44 compute-0 kepler[224493]: I0126 16:31:44.646455       1 node_platform_energy.go:53] Using the Regressor/AbsPower Power Model to estimate Node Platform Power
Jan 26 16:31:44 compute-0 kepler[224493]: I0126 16:31:44.663219       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 26 16:31:44 compute-0 kepler[224493]: I0126 16:31:44.663259       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 26 16:31:44 compute-0 kepler[224493]: I0126 16:31:44.663264       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 26 16:31:44 compute-0 kepler[224493]: I0126 16:31:44.663269       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 26 16:31:44 compute-0 kepler[224493]: I0126 16:31:44.663276       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Jan 26 16:31:44 compute-0 kepler[224493]: I0126 16:31:44.663291       1 node_component_energy.go:57] Using the Regressor/AbsPower Power Model to estimate Node Component Power
Jan 26 16:31:44 compute-0 kepler[224493]: I0126 16:31:44.663365       1 prometheus_collector.go:90] Registered Process Prometheus metrics
Jan 26 16:31:44 compute-0 kepler[224493]: I0126 16:31:44.663390       1 prometheus_collector.go:95] Registered Container Prometheus metrics
Jan 26 16:31:44 compute-0 kepler[224493]: I0126 16:31:44.663409       1 prometheus_collector.go:100] Registered VM Prometheus metrics
Jan 26 16:31:44 compute-0 kepler[224493]: I0126 16:31:44.663423       1 prometheus_collector.go:104] Registered Node Prometheus metrics
Jan 26 16:31:44 compute-0 kepler[224493]: I0126 16:31:44.663486       1 exporter.go:194] starting to listen on 0.0.0.0:8888
Jan 26 16:31:44 compute-0 kepler[224493]: I0126 16:31:44.664236       1 exporter.go:208] Started Kepler in 614.619702ms
Jan 26 16:31:44 compute-0 python3.9[224684]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 26 16:31:44 compute-0 sudo[224682]: pam_unix(sudo:session): session closed for user root
Jan 26 16:31:45 compute-0 podman[224719]: 2026-01-26 16:31:45.278791659 +0000 UTC m=+0.165475020 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 26 16:31:46 compute-0 sudo[224869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-peeesneubbgmbrhlvesksdmgkpfqkrmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445105.398481-643-253092113031020/AnsiballZ_podman_container_info.py'
Jan 26 16:31:46 compute-0 sudo[224869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:31:46 compute-0 python3.9[224871]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Jan 26 16:31:46 compute-0 sudo[224869]: pam_unix(sudo:session): session closed for user root
Jan 26 16:31:47 compute-0 sudo[225034]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhkfwkrsyjytzipozjpnxoyebijpvvyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445106.6221552-651-81439038134608/AnsiballZ_podman_container_exec.py'
Jan 26 16:31:47 compute-0 sudo[225034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:31:47 compute-0 python3.9[225036]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 26 16:31:47 compute-0 systemd[1]: Started libpod-conmon-6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d.scope.
Jan 26 16:31:47 compute-0 podman[225037]: 2026-01-26 16:31:47.744349827 +0000 UTC m=+0.123051537 container exec 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 26 16:31:47 compute-0 podman[225037]: 2026-01-26 16:31:47.777759495 +0000 UTC m=+0.156461175 container exec_died 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.license=GPLv2)
Jan 26 16:31:47 compute-0 systemd[1]: libpod-conmon-6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d.scope: Deactivated successfully.
Jan 26 16:31:47 compute-0 sudo[225034]: pam_unix(sudo:session): session closed for user root
Jan 26 16:31:48 compute-0 sudo[225215]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lepswrgksylnjbjmbessgdawovdccmux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445108.161469-659-277612304927066/AnsiballZ_podman_container_exec.py'
Jan 26 16:31:48 compute-0 sudo[225215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:31:48 compute-0 python3.9[225217]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 26 16:31:48 compute-0 systemd[1]: Started libpod-conmon-6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d.scope.
Jan 26 16:31:48 compute-0 podman[225218]: 2026-01-26 16:31:48.977209409 +0000 UTC m=+0.208071309 container exec 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 26 16:31:48 compute-0 podman[225218]: 2026-01-26 16:31:48.984886838 +0000 UTC m=+0.215748738 container exec_died 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 26 16:31:49 compute-0 sudo[225215]: pam_unix(sudo:session): session closed for user root
Jan 26 16:31:49 compute-0 systemd[1]: libpod-conmon-6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d.scope: Deactivated successfully.
Jan 26 16:31:49 compute-0 sudo[225399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcjhrwqvrwirfpubakzktbwewreqvycr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445109.2744544-667-146509887376506/AnsiballZ_file.py'
Jan 26 16:31:49 compute-0 sudo[225399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:31:50 compute-0 python3.9[225401]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:31:50 compute-0 sudo[225399]: pam_unix(sudo:session): session closed for user root
Jan 26 16:31:50 compute-0 sudo[225551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awiwzosdrprwjurqeaerfaiujoippyps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445110.3727243-676-251739291039468/AnsiballZ_podman_container_info.py'
Jan 26 16:31:50 compute-0 sudo[225551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:31:51 compute-0 python3.9[225553]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_metadata_agent'] executable=podman
Jan 26 16:31:51 compute-0 sudo[225551]: pam_unix(sudo:session): session closed for user root
Jan 26 16:31:51 compute-0 sudo[225730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfsebwatlecvobyxetfvebqkmgqbtcdr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445111.3840487-684-273580019767123/AnsiballZ_podman_container_exec.py'
Jan 26 16:31:51 compute-0 sudo[225730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:31:51 compute-0 podman[225690]: 2026-01-26 16:31:51.843616594 +0000 UTC m=+0.123130788 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 26 16:31:52 compute-0 python3.9[225739]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 26 16:31:52 compute-0 systemd[1]: Started libpod-conmon-881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6.scope.
Jan 26 16:31:52 compute-0 podman[225740]: 2026-01-26 16:31:52.216191854 +0000 UTC m=+0.151515291 container exec 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent)
Jan 26 16:31:52 compute-0 podman[225740]: 2026-01-26 16:31:52.24985894 +0000 UTC m=+0.185182367 container exec_died 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 26 16:31:52 compute-0 systemd[1]: libpod-conmon-881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6.scope: Deactivated successfully.
Jan 26 16:31:52 compute-0 sudo[225730]: pam_unix(sudo:session): session closed for user root
Jan 26 16:31:52 compute-0 sudo[225921]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpgtmyhntkczmownjqkbmgdpefzzhbad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445112.5221055-692-154018643917466/AnsiballZ_podman_container_exec.py'
Jan 26 16:31:52 compute-0 sudo[225921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:31:53 compute-0 python3.9[225923]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 26 16:31:53 compute-0 systemd[1]: Started libpod-conmon-881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6.scope.
Jan 26 16:31:53 compute-0 podman[225924]: 2026-01-26 16:31:53.351194705 +0000 UTC m=+0.131025194 container exec 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:31:53 compute-0 podman[225924]: 2026-01-26 16:31:53.385565709 +0000 UTC m=+0.165396228 container exec_died 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Jan 26 16:31:53 compute-0 systemd[1]: libpod-conmon-881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6.scope: Deactivated successfully.
Jan 26 16:31:53 compute-0 sudo[225921]: pam_unix(sudo:session): session closed for user root
Jan 26 16:31:54 compute-0 sudo[226107]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxflifowgpwgxslzotpdidqtbnzlbyed ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445113.6914253-700-81459100801441/AnsiballZ_file.py'
Jan 26 16:31:54 compute-0 sudo[226107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:31:54 compute-0 python3.9[226109]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_metadata_agent recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:31:54 compute-0 sudo[226107]: pam_unix(sudo:session): session closed for user root
Jan 26 16:31:55 compute-0 sudo[226259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugzhqjwccqntxcfjvwnyfsoplhfxsotd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445114.6379445-709-259667529163539/AnsiballZ_podman_container_info.py'
Jan 26 16:31:55 compute-0 sudo[226259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:31:55 compute-0 python3.9[226261]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman
Jan 26 16:31:55 compute-0 sudo[226259]: pam_unix(sudo:session): session closed for user root
Jan 26 16:31:56 compute-0 sudo[226424]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjckespnpzyoksotspipstshtkdcyxuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445115.6856494-717-62178993986483/AnsiballZ_podman_container_exec.py'
Jan 26 16:31:56 compute-0 sudo[226424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:31:56 compute-0 python3.9[226426]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 26 16:31:56 compute-0 systemd[1]: Started libpod-conmon-5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0.scope.
Jan 26 16:31:56 compute-0 podman[226427]: 2026-01-26 16:31:56.43421321 +0000 UTC m=+0.095582000 container exec 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.build-date=20260120, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, tcib_managed=true, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Jan 26 16:31:56 compute-0 podman[226427]: 2026-01-26 16:31:56.466148419 +0000 UTC m=+0.127517199 container exec_died 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260120, org.label-schema.vendor=CentOS, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, tcib_managed=true)
Jan 26 16:31:56 compute-0 systemd[1]: libpod-conmon-5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0.scope: Deactivated successfully.
Jan 26 16:31:56 compute-0 sudo[226424]: pam_unix(sudo:session): session closed for user root
Jan 26 16:31:57 compute-0 sudo[226608]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfecgguyihdaymvieyfsbkeheeqxthkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445116.7583876-725-196380862189262/AnsiballZ_podman_container_exec.py'
Jan 26 16:31:57 compute-0 sudo[226608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:31:57 compute-0 python3.9[226610]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 26 16:31:57 compute-0 systemd[1]: Started libpod-conmon-5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0.scope.
Jan 26 16:31:57 compute-0 podman[226611]: 2026-01-26 16:31:57.585792681 +0000 UTC m=+0.162961362 container exec 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260120, org.label-schema.vendor=CentOS)
Jan 26 16:31:57 compute-0 podman[226611]: 2026-01-26 16:31:57.626129028 +0000 UTC m=+0.203297639 container exec_died 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260120, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4)
Jan 26 16:31:57 compute-0 sudo[226608]: pam_unix(sudo:session): session closed for user root
Jan 26 16:31:57 compute-0 systemd[1]: libpod-conmon-5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0.scope: Deactivated successfully.
Jan 26 16:31:58 compute-0 sudo[226790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fyclkrqrwdojrtchtryehezjbbxzabsy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445117.955172-733-97418271954213/AnsiballZ_file.py'
Jan 26 16:31:58 compute-0 sudo[226790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:31:58 compute-0 python3.9[226792]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:31:58 compute-0 sudo[226790]: pam_unix(sudo:session): session closed for user root
Jan 26 16:31:59 compute-0 podman[226892]: 2026-01-26 16:31:59.196721492 +0000 UTC m=+0.084475178 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', 
'/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, distribution-scope=public, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-type=git, version=9.6, architecture=x86_64, container_name=openstack_network_exporter, io.buildah.version=1.33.7, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, release=1755695350, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container)
Jan 26 16:31:59 compute-0 sudo[226962]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gigwjyrxwmaziipmnwwmrvjxldysehjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445118.8892047-742-162730431958791/AnsiballZ_podman_container_info.py'
Jan 26 16:31:59 compute-0 sudo[226962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:31:59 compute-0 python3.9[226964]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman
Jan 26 16:31:59 compute-0 sudo[226962]: pam_unix(sudo:session): session closed for user root
Jan 26 16:31:59 compute-0 podman[201244]: time="2026-01-26T16:31:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 16:31:59 compute-0 podman[201244]: @ - - [26/Jan/2026:16:31:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27277 "" "Go-http-client/1.1"
Jan 26 16:31:59 compute-0 podman[201244]: @ - - [26/Jan/2026:16:31:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3846 "" "Go-http-client/1.1"
Jan 26 16:32:00 compute-0 sudo[227126]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufvhepbiwnbmhwcjlhpfdncgjlvzejlu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445119.7801-750-190828891614069/AnsiballZ_podman_container_exec.py'
Jan 26 16:32:00 compute-0 sudo[227126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:32:00 compute-0 python3.9[227128]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 26 16:32:00 compute-0 systemd[1]: Started libpod-conmon-89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633.scope.
Jan 26 16:32:00 compute-0 podman[227129]: 2026-01-26 16:32:00.503900274 +0000 UTC m=+0.128228038 container exec 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 26 16:32:00 compute-0 podman[227129]: 2026-01-26 16:32:00.538299499 +0000 UTC m=+0.162627243 container exec_died 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 16:32:00 compute-0 systemd[1]: libpod-conmon-89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633.scope: Deactivated successfully.
Jan 26 16:32:00 compute-0 sudo[227126]: pam_unix(sudo:session): session closed for user root
Jan 26 16:32:01 compute-0 sudo[227308]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbqcdyztzslgivjgknscueyzjuoqonie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445120.7970955-758-127160804223793/AnsiballZ_podman_container_exec.py'
Jan 26 16:32:01 compute-0 sudo[227308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:32:01 compute-0 python3.9[227310]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 26 16:32:01 compute-0 openstack_network_exporter[204387]: ERROR   16:32:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 16:32:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:32:01 compute-0 openstack_network_exporter[204387]: ERROR   16:32:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 16:32:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:32:01 compute-0 systemd[1]: Started libpod-conmon-89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633.scope.
Jan 26 16:32:01 compute-0 podman[227311]: 2026-01-26 16:32:01.563494164 +0000 UTC m=+0.137633723 container exec 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 26 16:32:01 compute-0 podman[227311]: 2026-01-26 16:32:01.598454055 +0000 UTC m=+0.172593554 container exec_died 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 26 16:32:01 compute-0 systemd[1]: libpod-conmon-89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633.scope: Deactivated successfully.
Jan 26 16:32:01 compute-0 sudo[227308]: pam_unix(sudo:session): session closed for user root
Jan 26 16:32:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:32:01.707 106955 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:32:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:32:01.709 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:32:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:32:01.709 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:32:02 compute-0 sudo[227507]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-harshpcdubohwgwxoxknkdgnzdsouyll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445121.9497383-766-146356625679797/AnsiballZ_file.py'
Jan 26 16:32:02 compute-0 sudo[227507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:32:02 compute-0 podman[227465]: 2026-01-26 16:32:02.436129801 +0000 UTC m=+0.084388615 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, tcib_managed=true, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, io.buildah.version=1.41.4, org.label-schema.build-date=20260120, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Jan 26 16:32:02 compute-0 python3.9[227512]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:32:02 compute-0 sudo[227507]: pam_unix(sudo:session): session closed for user root
Jan 26 16:32:03 compute-0 sudo[227662]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inzrsarnvjmezpdzbjmksjnvthblsytt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445122.9531293-775-196009627975262/AnsiballZ_podman_container_info.py'
Jan 26 16:32:03 compute-0 sudo[227662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:32:03 compute-0 python3.9[227664]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Jan 26 16:32:03 compute-0 sudo[227662]: pam_unix(sudo:session): session closed for user root
Jan 26 16:32:04 compute-0 sudo[227825]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njewtupxkfezocwymhadknpxfhmauiju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445123.9501367-783-236179846511492/AnsiballZ_podman_container_exec.py'
Jan 26 16:32:04 compute-0 sudo[227825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:32:04 compute-0 python3.9[227827]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 26 16:32:04 compute-0 systemd[1]: Started libpod-conmon-25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64.scope.
Jan 26 16:32:04 compute-0 podman[227828]: 2026-01-26 16:32:04.80099014 +0000 UTC m=+0.124965659 container exec 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 26 16:32:04 compute-0 podman[227828]: 2026-01-26 16:32:04.833409941 +0000 UTC m=+0.157385430 container exec_died 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 26 16:32:04 compute-0 systemd[1]: libpod-conmon-25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64.scope: Deactivated successfully.
Jan 26 16:32:04 compute-0 sudo[227825]: pam_unix(sudo:session): session closed for user root
Jan 26 16:32:05 compute-0 sudo[228009]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjxehqcotygbcgrjmqxhgyavoruzcxcf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445125.1038947-791-201762263316593/AnsiballZ_podman_container_exec.py'
Jan 26 16:32:05 compute-0 sudo[228009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:32:05 compute-0 python3.9[228011]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 26 16:32:05 compute-0 systemd[1]: Started libpod-conmon-25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64.scope.
Jan 26 16:32:05 compute-0 podman[228012]: 2026-01-26 16:32:05.872811882 +0000 UTC m=+0.099162507 container exec 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 26 16:32:05 compute-0 podman[228012]: 2026-01-26 16:32:05.907756993 +0000 UTC m=+0.134107588 container exec_died 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 26 16:32:05 compute-0 systemd[1]: libpod-conmon-25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64.scope: Deactivated successfully.
Jan 26 16:32:05 compute-0 sudo[228009]: pam_unix(sudo:session): session closed for user root
Jan 26 16:32:06 compute-0 sudo[228190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbgmjvpstxgixgapsbyuimsronjjwhpu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445126.2202673-799-267117146607899/AnsiballZ_file.py'
Jan 26 16:32:06 compute-0 sudo[228190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:32:06 compute-0 python3.9[228192]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:32:06 compute-0 sudo[228190]: pam_unix(sudo:session): session closed for user root
Jan 26 16:32:07 compute-0 sudo[228342]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtpneprafratqrtahymxvayssmlcqxgp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445127.1087837-808-101313301613966/AnsiballZ_podman_container_info.py'
Jan 26 16:32:07 compute-0 sudo[228342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:32:07 compute-0 python3.9[228344]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Jan 26 16:32:07 compute-0 sudo[228342]: pam_unix(sudo:session): session closed for user root
Jan 26 16:32:08 compute-0 sudo[228506]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icurdcmprnimvumxljfaqgaxvseogvmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445127.978888-816-152977358097479/AnsiballZ_podman_container_exec.py'
Jan 26 16:32:08 compute-0 sudo[228506]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:32:08 compute-0 python3.9[228508]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 26 16:32:08 compute-0 systemd[1]: Started libpod-conmon-2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069.scope.
Jan 26 16:32:08 compute-0 podman[228509]: 2026-01-26 16:32:08.775468995 +0000 UTC m=+0.113656002 container exec 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, vcs-type=git, config_id=openstack_network_exporter, maintainer=Red Hat, Inc., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, name=ubi9-minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, architecture=x86_64, container_name=openstack_network_exporter, vendor=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 26 16:32:08 compute-0 podman[228509]: 2026-01-26 16:32:08.809896291 +0000 UTC m=+0.148083258 container exec_died 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, architecture=x86_64, container_name=openstack_network_exporter, version=9.6, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, maintainer=Red Hat, Inc., name=ubi9-minimal, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Jan 26 16:32:08 compute-0 systemd[1]: libpod-conmon-2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069.scope: Deactivated successfully.
Jan 26 16:32:08 compute-0 sudo[228506]: pam_unix(sudo:session): session closed for user root
Jan 26 16:32:09 compute-0 podman[228575]: 2026-01-26 16:32:09.244922029 +0000 UTC m=+0.118414290 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 26 16:32:09 compute-0 sudo[228710]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kubzwgxpbwuaksccdgmpjsaqecwutufo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445129.122613-824-152471759413589/AnsiballZ_podman_container_exec.py'
Jan 26 16:32:09 compute-0 sudo[228710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:32:09 compute-0 python3.9[228712]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 26 16:32:09 compute-0 systemd[1]: Started libpod-conmon-2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069.scope.
Jan 26 16:32:09 compute-0 podman[228713]: 2026-01-26 16:32:09.969528291 +0000 UTC m=+0.129851543 container exec 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.openshift.expose-services=, name=ubi9-minimal, io.buildah.version=1.33.7, architecture=x86_64, distribution-scope=public, release=1755695350, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, config_id=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 26 16:32:10 compute-0 podman[228713]: 2026-01-26 16:32:10.005248762 +0000 UTC m=+0.165572014 container exec_died 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_id=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, distribution-scope=public, maintainer=Red Hat, Inc., release=1755695350, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc.)
Jan 26 16:32:10 compute-0 systemd[1]: libpod-conmon-2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069.scope: Deactivated successfully.
Jan 26 16:32:10 compute-0 sudo[228710]: pam_unix(sudo:session): session closed for user root
Jan 26 16:32:10 compute-0 sudo[228902]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttghejuehdyjcshinwbgjsmgqeclvsoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445130.2870967-832-248169250328910/AnsiballZ_file.py'
Jan 26 16:32:10 compute-0 sudo[228902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:32:10 compute-0 podman[228866]: 2026-01-26 16:32:10.776464241 +0000 UTC m=+0.110743833 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 26 16:32:10 compute-0 python3.9[228905]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:32:10 compute-0 sudo[228902]: pam_unix(sudo:session): session closed for user root
Jan 26 16:32:11 compute-0 sudo[229061]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmudpksyqrvkbgsyydyiapsolrdwxbid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445131.313868-841-212015798583148/AnsiballZ_podman_container_info.py'
Jan 26 16:32:11 compute-0 sudo[229061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:32:12 compute-0 python3.9[229063]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_ipmi'] executable=podman
Jan 26 16:32:12 compute-0 sudo[229061]: pam_unix(sudo:session): session closed for user root
Jan 26 16:32:13 compute-0 sudo[229233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrlrqjzesenljpntvwrickpoacgihlat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445132.5182624-849-109860350963230/AnsiballZ_podman_container_exec.py'
Jan 26 16:32:13 compute-0 sudo[229233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:32:13 compute-0 podman[229199]: 2026-01-26 16:32:13.123822225 +0000 UTC m=+0.145669552 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=2, health_log=, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, config_id=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Jan 26 16:32:13 compute-0 systemd[1]: 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990-216c0ba43bd15fde.service: Main process exited, code=exited, status=1/FAILURE
Jan 26 16:32:13 compute-0 systemd[1]: 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990-216c0ba43bd15fde.service: Failed with result 'exit-code'.
Jan 26 16:32:13 compute-0 python3.9[229244]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 26 16:32:13 compute-0 systemd[1]: Started libpod-conmon-9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990.scope.
Jan 26 16:32:13 compute-0 podman[229247]: 2026-01-26 16:32:13.493780364 +0000 UTC m=+0.125978987 container exec 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, 
config_id=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 26 16:32:13 compute-0 podman[229247]: 2026-01-26 16:32:13.527234843 +0000 UTC m=+0.159433456 container exec_died 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, config_id=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Jan 26 16:32:13 compute-0 sudo[229233]: pam_unix(sudo:session): session closed for user root
Jan 26 16:32:13 compute-0 systemd[1]: libpod-conmon-9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990.scope: Deactivated successfully.
Jan 26 16:32:14 compute-0 sudo[229441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqxuguqafinztdoznjnkkywcfshyrbqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445133.8311112-857-219862226424516/AnsiballZ_podman_container_exec.py'
Jan 26 16:32:14 compute-0 sudo[229441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:32:14 compute-0 podman[229401]: 2026-01-26 16:32:14.276472084 +0000 UTC m=+0.093753289 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=kepler, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, name=ubi9, release=1214.1726694543, 
architecture=x86_64, io.buildah.version=1.29.0, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, managed_by=edpm_ansible)
Jan 26 16:32:14 compute-0 python3.9[229446]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 26 16:32:14 compute-0 systemd[1]: Started libpod-conmon-9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990.scope.
Jan 26 16:32:14 compute-0 podman[229447]: 2026-01-26 16:32:14.6121109 +0000 UTC m=+0.116354594 container exec 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi)
Jan 26 16:32:14 compute-0 podman[229447]: 2026-01-26 16:32:14.644235154 +0000 UTC m=+0.148478848 container exec_died 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Jan 26 16:32:14 compute-0 sudo[229441]: pam_unix(sudo:session): session closed for user root
Jan 26 16:32:14 compute-0 systemd[1]: libpod-conmon-9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990.scope: Deactivated successfully.
Jan 26 16:32:15 compute-0 sudo[229641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvrvbieogyrevchqjokgnpmrfvedcocv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445134.9131024-865-77408865902663/AnsiballZ_file.py'
Jan 26 16:32:15 compute-0 sudo[229641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:32:15 compute-0 podman[229598]: 2026-01-26 16:32:15.526109501 +0000 UTC m=+0.173937480 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, 
org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 16:32:15 compute-0 python3.9[229646]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:32:15 compute-0 sudo[229641]: pam_unix(sudo:session): session closed for user root
Jan 26 16:32:16 compute-0 sudo[229802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsjzejaidrvekcmdcmyzeepxtfmdchgm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445136.0258362-874-103761682306091/AnsiballZ_podman_container_info.py'
Jan 26 16:32:16 compute-0 sudo[229802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:32:16 compute-0 python3.9[229804]: ansible-containers.podman.podman_container_info Invoked with name=['kepler'] executable=podman
Jan 26 16:32:16 compute-0 sudo[229802]: pam_unix(sudo:session): session closed for user root
Jan 26 16:32:17 compute-0 sudo[229966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozmmlmjetbtfvlfzaauicydzfjgynqlv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445137.1640377-882-199389091342588/AnsiballZ_podman_container_exec.py'
Jan 26 16:32:17 compute-0 sudo[229966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:32:17 compute-0 python3.9[229968]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 26 16:32:18 compute-0 systemd[1]: Started libpod-conmon-d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285.scope.
Jan 26 16:32:18 compute-0 podman[229969]: 2026-01-26 16:32:18.041845933 +0000 UTC m=+0.148119017 container exec d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, name=ubi9, io.buildah.version=1.29.0, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, vendor=Red Hat, Inc., managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, architecture=x86_64, io.openshift.expose-services=, distribution-scope=public, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=kepler, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, container_name=kepler)
Jan 26 16:32:18 compute-0 podman[229969]: 2026-01-26 16:32:18.074252965 +0000 UTC m=+0.180526079 container exec_died d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, config_id=kepler, release=1214.1726694543, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release-0.7.12=, managed_by=edpm_ansible, name=ubi9, container_name=kepler)
Jan 26 16:32:18 compute-0 systemd[1]: libpod-conmon-d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285.scope: Deactivated successfully.
Jan 26 16:32:18 compute-0 sudo[229966]: pam_unix(sudo:session): session closed for user root
Jan 26 16:32:18 compute-0 sudo[230148]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uflwlmrkkduvpxqcgtpjnidxhtxuleam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445138.3341458-890-253069589318472/AnsiballZ_podman_container_exec.py'
Jan 26 16:32:18 compute-0 sudo[230148]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:32:19 compute-0 python3.9[230150]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 26 16:32:19 compute-0 systemd[1]: Started libpod-conmon-d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285.scope.
Jan 26 16:32:19 compute-0 podman[230151]: 2026-01-26 16:32:19.179507835 +0000 UTC m=+0.096283738 container exec d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.openshift.expose-services=, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, container_name=kepler, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, architecture=x86_64, vendor=Red Hat, Inc., name=ubi9, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=kepler, distribution-scope=public, io.openshift.tags=base rhel9, version=9.4)
Jan 26 16:32:19 compute-0 podman[230151]: 2026-01-26 16:32:19.212591455 +0000 UTC m=+0.129367348 container exec_died d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, io.buildah.version=1.29.0, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, com.redhat.component=ubi9-container, container_name=kepler, maintainer=Red Hat, Inc., 
vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=kepler, architecture=x86_64, io.openshift.expose-services=, name=ubi9, release=1214.1726694543)
Jan 26 16:32:19 compute-0 systemd[1]: libpod-conmon-d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285.scope: Deactivated successfully.
Jan 26 16:32:19 compute-0 sudo[230148]: pam_unix(sudo:session): session closed for user root
Jan 26 16:32:19 compute-0 sudo[230331]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymjywsrgeeibiddgzvaistcxmtlzwvin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445139.5296943-898-83762804151566/AnsiballZ_file.py'
Jan 26 16:32:19 compute-0 sudo[230331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:32:20 compute-0 python3.9[230333]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/kepler recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:32:20 compute-0 sudo[230331]: pam_unix(sudo:session): session closed for user root
Jan 26 16:32:21 compute-0 sudo[230483]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbijdqvpfoqedfehxpvqhdpvkoqoiqbp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445140.5586083-907-1219699063829/AnsiballZ_file.py'
Jan 26 16:32:21 compute-0 sudo[230483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:32:21 compute-0 python3.9[230485]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:32:21 compute-0 sudo[230483]: pam_unix(sudo:session): session closed for user root
Jan 26 16:32:21 compute-0 sudo[230635]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fswfkspiljhwqedikktewnwjwyjjnytm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445141.518071-915-147986269381224/AnsiballZ_stat.py'
Jan 26 16:32:22 compute-0 sudo[230635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:32:22 compute-0 podman[230637]: 2026-01-26 16:32:22.118799854 +0000 UTC m=+0.106110536 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 26 16:32:22 compute-0 python3.9[230638]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/kepler.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:32:22 compute-0 sudo[230635]: pam_unix(sudo:session): session closed for user root
Jan 26 16:32:22 compute-0 sudo[230782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tebsbrqvshpyssuneffduexqdzrsfdbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445141.518071-915-147986269381224/AnsiballZ_copy.py'
Jan 26 16:32:22 compute-0 sudo[230782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:32:22 compute-0 python3.9[230784]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/kepler.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769445141.518071-915-147986269381224/.source.yaml _original_basename=firewall.yaml follow=False checksum=40b8960d32c81de936cddbeb137a8240ecc54e7b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:32:22 compute-0 sudo[230782]: pam_unix(sudo:session): session closed for user root
Jan 26 16:32:23 compute-0 sudo[230934]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szaxenxaqbrrjukafojovummiijcoecb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445143.356191-931-159120028407514/AnsiballZ_file.py'
Jan 26 16:32:23 compute-0 sudo[230934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:32:23 compute-0 python3.9[230936]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:32:24 compute-0 sudo[230934]: pam_unix(sudo:session): session closed for user root
Jan 26 16:32:24 compute-0 nova_compute[185389]: 2026-01-26 16:32:24.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:32:24 compute-0 nova_compute[185389]: 2026-01-26 16:32:24.720 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 26 16:32:24 compute-0 sudo[231086]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eleopfapqbrjosfjoxguxyzhbsdjciod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445144.2640328-939-246297371894267/AnsiballZ_stat.py'
Jan 26 16:32:24 compute-0 sudo[231086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:32:24 compute-0 nova_compute[185389]: 2026-01-26 16:32:24.752 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 26 16:32:24 compute-0 nova_compute[185389]: 2026-01-26 16:32:24.754 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:32:24 compute-0 nova_compute[185389]: 2026-01-26 16:32:24.754 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 26 16:32:24 compute-0 nova_compute[185389]: 2026-01-26 16:32:24.775 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:32:24 compute-0 python3.9[231088]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:32:24 compute-0 sudo[231086]: pam_unix(sudo:session): session closed for user root
Jan 26 16:32:25 compute-0 sudo[231164]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tuzlzzckeclrclvtixfkeuqywvwmhcde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445144.2640328-939-246297371894267/AnsiballZ_file.py'
Jan 26 16:32:25 compute-0 sudo[231164]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:32:25 compute-0 python3.9[231166]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:32:25 compute-0 sudo[231164]: pam_unix(sudo:session): session closed for user root
Jan 26 16:32:25 compute-0 nova_compute[185389]: 2026-01-26 16:32:25.790 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:32:26 compute-0 sudo[231316]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcjttgpedbjiivcjiegidewyshfrjzmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445145.7549362-951-32018296149050/AnsiballZ_stat.py'
Jan 26 16:32:26 compute-0 sudo[231316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:32:26 compute-0 python3.9[231318]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:32:26 compute-0 sudo[231316]: pam_unix(sudo:session): session closed for user root
Jan 26 16:32:26 compute-0 nova_compute[185389]: 2026-01-26 16:32:26.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:32:26 compute-0 nova_compute[185389]: 2026-01-26 16:32:26.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:32:26 compute-0 sudo[231394]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrfdwdprksrjxobndzlwbrjxxatljriq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445145.7549362-951-32018296149050/AnsiballZ_file.py'
Jan 26 16:32:26 compute-0 sudo[231394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:32:26 compute-0 python3.9[231396]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.fa6q3g8e recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:32:26 compute-0 sudo[231394]: pam_unix(sudo:session): session closed for user root
Jan 26 16:32:27 compute-0 sudo[231546]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxqmjuzmgnwmraipmvfyoqvnpdwbropn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445147.2009656-963-270581503934300/AnsiballZ_stat.py'
Jan 26 16:32:27 compute-0 sudo[231546]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:32:27 compute-0 nova_compute[185389]: 2026-01-26 16:32:27.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:32:27 compute-0 nova_compute[185389]: 2026-01-26 16:32:27.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:32:27 compute-0 nova_compute[185389]: 2026-01-26 16:32:27.756 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:32:27 compute-0 nova_compute[185389]: 2026-01-26 16:32:27.757 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:32:27 compute-0 nova_compute[185389]: 2026-01-26 16:32:27.758 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:32:27 compute-0 nova_compute[185389]: 2026-01-26 16:32:27.759 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:32:27 compute-0 python3.9[231548]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:32:28 compute-0 sudo[231546]: pam_unix(sudo:session): session closed for user root
Jan 26 16:32:28 compute-0 nova_compute[185389]: 2026-01-26 16:32:28.140 185393 WARNING nova.virt.libvirt.driver [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:32:28 compute-0 nova_compute[185389]: 2026-01-26 16:32:28.141 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5668MB free_disk=72.47952270507812GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:32:28 compute-0 nova_compute[185389]: 2026-01-26 16:32:28.141 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:32:28 compute-0 nova_compute[185389]: 2026-01-26 16:32:28.141 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:32:28 compute-0 nova_compute[185389]: 2026-01-26 16:32:28.298 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:32:28 compute-0 nova_compute[185389]: 2026-01-26 16:32:28.298 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:32:28 compute-0 sudo[231624]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-faomyqwwavzazeojjbkkvwkhfqikruzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445147.2009656-963-270581503934300/AnsiballZ_file.py'
Jan 26 16:32:28 compute-0 sudo[231624]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:32:28 compute-0 nova_compute[185389]: 2026-01-26 16:32:28.388 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Refreshing inventories for resource provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 26 16:32:28 compute-0 nova_compute[185389]: 2026-01-26 16:32:28.450 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Updating ProviderTree inventory for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 26 16:32:28 compute-0 nova_compute[185389]: 2026-01-26 16:32:28.451 185393 DEBUG nova.compute.provider_tree [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Updating inventory in ProviderTree for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 26 16:32:28 compute-0 nova_compute[185389]: 2026-01-26 16:32:28.466 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Refreshing aggregate associations for resource provider b0bb5d31-f35b-4a04-b67d-66acc24fb822, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 26 16:32:28 compute-0 nova_compute[185389]: 2026-01-26 16:32:28.489 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Refreshing trait associations for resource provider b0bb5d31-f35b-4a04-b67d-66acc24fb822, traits: COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SHA,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_ABM,HW_CPU_X86_AMD_SVM,HW_CPU_X86_F16C,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE42,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_BMI,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_MMX,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 26 16:32:28 compute-0 nova_compute[185389]: 2026-01-26 16:32:28.530 185393 DEBUG nova.compute.provider_tree [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:32:28 compute-0 nova_compute[185389]: 2026-01-26 16:32:28.545 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:32:28 compute-0 nova_compute[185389]: 2026-01-26 16:32:28.547 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:32:28 compute-0 nova_compute[185389]: 2026-01-26 16:32:28.548 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.407s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:32:28 compute-0 python3.9[231626]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:32:28 compute-0 sudo[231624]: pam_unix(sudo:session): session closed for user root
Jan 26 16:32:29 compute-0 sudo[231790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhhzyydbwolfohrrtjydpdmgjljlapwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445148.8461905-976-214845525148375/AnsiballZ_command.py'
Jan 26 16:32:29 compute-0 sudo[231790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:32:29 compute-0 podman[231750]: 2026-01-26 16:32:29.367449221 +0000 UTC m=+0.116235921 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, vcs-type=git, io.openshift.expose-services=, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_id=openstack_network_exporter, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, name=ubi9-minimal, distribution-scope=public, release=1755695350, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 26 16:32:29 compute-0 python3.9[231796]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 16:32:29 compute-0 nova_compute[185389]: 2026-01-26 16:32:29.547 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:32:29 compute-0 nova_compute[185389]: 2026-01-26 16:32:29.547 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:32:29 compute-0 sudo[231790]: pam_unix(sudo:session): session closed for user root
Jan 26 16:32:29 compute-0 nova_compute[185389]: 2026-01-26 16:32:29.715 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:32:29 compute-0 nova_compute[185389]: 2026-01-26 16:32:29.718 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:32:29 compute-0 nova_compute[185389]: 2026-01-26 16:32:29.719 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:32:29 compute-0 nova_compute[185389]: 2026-01-26 16:32:29.719 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 16:32:29 compute-0 nova_compute[185389]: 2026-01-26 16:32:29.740 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 16:32:29 compute-0 podman[201244]: time="2026-01-26T16:32:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 16:32:29 compute-0 podman[201244]: @ - - [26/Jan/2026:16:32:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27276 "" "Go-http-client/1.1"
Jan 26 16:32:29 compute-0 podman[201244]: @ - - [26/Jan/2026:16:32:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3849 "" "Go-http-client/1.1"
Jan 26 16:32:30 compute-0 sudo[231948]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hktumdixuwitgiozpammnjebyfyzdrdy ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769445149.794801-984-277316659380218/AnsiballZ_edpm_nftables_from_files.py'
Jan 26 16:32:30 compute-0 sudo[231948]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:32:30 compute-0 python3[231950]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 26 16:32:30 compute-0 sudo[231948]: pam_unix(sudo:session): session closed for user root
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.328 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.329 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.330 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f04ce8af020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.332 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.332 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.332 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.332 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.332 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.333 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.334 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f04ce8af080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.334 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.334 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f04ce8af0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.334 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.334 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f04ce8af470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.334 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.335 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f04ce8ac260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.335 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.335 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f04cfa81c70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.336 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.336 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f04ce8ac170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'cpu': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'cpu': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.336 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.337 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f04ce8af140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.337 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.337 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f04ce8adb50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.337 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.337 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f04ce8ad9a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.337 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.337 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8adb20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'cpu': [], 'network.incoming.packets': [], 'disk.ephemeral.size': [], 'network.incoming.packets.drop': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.337 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f04ce8af1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.338 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.338 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f04ce8ad8e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.339 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.339 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f04ce8ada60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.338 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'cpu': [], 'network.incoming.packets': [], 'disk.ephemeral.size': [], 'network.incoming.packets.drop': [], 'network.outgoing.bytes': [], 'disk.root.size': [], 'network.outgoing.packets': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.339 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.340 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f04ce8adaf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.340 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.340 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f04ce8ad880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.340 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.339 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'cpu': [], 'network.incoming.packets': [], 'disk.ephemeral.size': [], 'network.incoming.packets.drop': [], 'network.outgoing.bytes': [], 'disk.root.size': [], 'network.outgoing.packets': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.340 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'cpu': [], 'network.incoming.packets': [], 'disk.ephemeral.size': [], 'network.incoming.packets.drop': [], 'network.outgoing.bytes': [], 'disk.root.size': [], 'network.outgoing.packets': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.341 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'cpu': [], 'network.incoming.packets': [], 'disk.ephemeral.size': [], 'network.incoming.packets.drop': [], 'network.outgoing.bytes': [], 'disk.root.size': [], 'network.outgoing.packets': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.341 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'cpu': [], 'network.incoming.packets': [], 'disk.ephemeral.size': [], 'network.incoming.packets.drop': [], 'network.outgoing.bytes': [], 'disk.root.size': [], 'network.outgoing.packets': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.341 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'cpu': [], 'network.incoming.packets': [], 'disk.ephemeral.size': [], 'network.incoming.packets.drop': [], 'network.outgoing.bytes': [], 'disk.root.size': [], 'network.outgoing.packets': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.342 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'cpu': [], 'network.incoming.packets': [], 'disk.ephemeral.size': [], 'network.incoming.packets.drop': [], 'network.outgoing.bytes': [], 'disk.root.size': [], 'network.outgoing.packets': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.342 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'cpu': [], 'network.incoming.packets': [], 'disk.ephemeral.size': [], 'network.incoming.packets.drop': [], 'network.outgoing.bytes': [], 'disk.root.size': [], 'network.outgoing.packets': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.342 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'cpu': [], 'network.incoming.packets': [], 'disk.ephemeral.size': [], 'network.incoming.packets.drop': [], 'network.outgoing.bytes': [], 'disk.root.size': [], 'network.outgoing.packets': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.343 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'cpu': [], 'network.incoming.packets': [], 'disk.ephemeral.size': [], 'network.incoming.packets.drop': [], 'network.outgoing.bytes': [], 'disk.root.size': [], 'network.outgoing.packets': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.343 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'cpu': [], 'network.incoming.packets': [], 'disk.ephemeral.size': [], 'network.incoming.packets.drop': [], 'network.outgoing.bytes': [], 'disk.root.size': [], 'network.outgoing.packets': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.343 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'cpu': [], 'network.incoming.packets': [], 'disk.ephemeral.size': [], 'network.incoming.packets.drop': [], 'network.outgoing.bytes': [], 'disk.root.size': [], 'network.outgoing.packets': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.343 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f04ce8af3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.344 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.344 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f04ce8af6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.344 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.344 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f04ce8af410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.344 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.344 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f04ce8ac470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.344 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.345 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f04d17142f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.345 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.345 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f04ce8afe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.345 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.345 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f04ce8aede0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.345 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.345 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f04ce8aef00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.345 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.345 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f04ce8ac710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.345 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.345 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f04ce8aef60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.345 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.346 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f04ce8aefc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.346 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.346 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.346 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.346 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.347 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.347 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.347 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.347 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.347 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.347 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.347 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.348 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.348 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.348 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.348 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.348 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.348 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.349 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.349 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.349 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.349 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.349 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.349 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.349 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.350 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.350 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:32:31.350 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:32:31 compute-0 openstack_network_exporter[204387]: ERROR   16:32:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 16:32:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:32:31 compute-0 openstack_network_exporter[204387]: ERROR   16:32:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 16:32:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:32:31 compute-0 sudo[232101]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anqxajpbfklzwcsuwxqnsunpdvddjyaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445150.9894023-992-137608289538568/AnsiballZ_stat.py'
Jan 26 16:32:31 compute-0 sudo[232101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:32:31 compute-0 nova_compute[185389]: 2026-01-26 16:32:31.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:32:31 compute-0 python3.9[232103]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:32:31 compute-0 sudo[232101]: pam_unix(sudo:session): session closed for user root
Jan 26 16:32:32 compute-0 sudo[232179]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znotaddugnefzhkztmpurnzpbiassqia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445150.9894023-992-137608289538568/AnsiballZ_file.py'
Jan 26 16:32:32 compute-0 sudo[232179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:32:32 compute-0 python3.9[232181]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:32:32 compute-0 sudo[232179]: pam_unix(sudo:session): session closed for user root
Jan 26 16:32:33 compute-0 podman[232305]: 2026-01-26 16:32:33.181867224 +0000 UTC m=+0.078997739 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260120, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Jan 26 16:32:33 compute-0 sudo[232349]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yuxpuggdiudwcxkjpmkmlgrpsolnmsre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445152.7108521-1004-263381360980855/AnsiballZ_stat.py'
Jan 26 16:32:33 compute-0 sudo[232349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:32:33 compute-0 python3.9[232353]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:32:33 compute-0 sudo[232349]: pam_unix(sudo:session): session closed for user root
Jan 26 16:32:33 compute-0 sudo[232429]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbnxwxrpfzupkacidkyfwwjgfqpnhbrm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445152.7108521-1004-263381360980855/AnsiballZ_file.py'
Jan 26 16:32:33 compute-0 sudo[232429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:32:34 compute-0 python3.9[232431]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:32:34 compute-0 sudo[232429]: pam_unix(sudo:session): session closed for user root
Jan 26 16:32:34 compute-0 sudo[232581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlbsjwlotezrndgqvzqftqicbfbprriv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445154.3203719-1016-7418185236353/AnsiballZ_stat.py'
Jan 26 16:32:34 compute-0 sudo[232581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:32:34 compute-0 python3.9[232583]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:32:34 compute-0 sudo[232581]: pam_unix(sudo:session): session closed for user root
Jan 26 16:32:35 compute-0 sudo[232659]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwiftffojpcvoqzqevqevhgpzgxvggbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445154.3203719-1016-7418185236353/AnsiballZ_file.py'
Jan 26 16:32:35 compute-0 sudo[232659]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:32:35 compute-0 python3.9[232661]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:32:35 compute-0 sudo[232659]: pam_unix(sudo:session): session closed for user root
Jan 26 16:32:36 compute-0 sudo[232811]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twdevebyytvtigncyarxnaevkntbqfss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445155.7865272-1028-89874442611290/AnsiballZ_stat.py'
Jan 26 16:32:36 compute-0 sudo[232811]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:32:36 compute-0 python3.9[232813]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:32:36 compute-0 sudo[232811]: pam_unix(sudo:session): session closed for user root
Jan 26 16:32:37 compute-0 sudo[232889]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvnujvgwgrcivexpwqvqteyitjucrbpi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445155.7865272-1028-89874442611290/AnsiballZ_file.py'
Jan 26 16:32:37 compute-0 sudo[232889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:32:37 compute-0 python3.9[232891]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:32:37 compute-0 sudo[232889]: pam_unix(sudo:session): session closed for user root
Jan 26 16:32:38 compute-0 sudo[233041]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zoiiycqfeeuyjoxwrivizacqzrpyufgz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445157.5222213-1040-273348727540200/AnsiballZ_stat.py'
Jan 26 16:32:38 compute-0 sudo[233041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:32:38 compute-0 python3.9[233043]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:32:38 compute-0 sudo[233041]: pam_unix(sudo:session): session closed for user root
Jan 26 16:32:38 compute-0 sudo[233166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imrhugwwnjtgjamyxptdqslbidkwzglw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445157.5222213-1040-273348727540200/AnsiballZ_copy.py'
Jan 26 16:32:38 compute-0 sudo[233166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:32:39 compute-0 python3.9[233168]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769445157.5222213-1040-273348727540200/.source.nft follow=False _original_basename=ruleset.j2 checksum=b82fbd2c71bb7c36c630c2301913f0f42fd2e7ce backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:32:39 compute-0 sudo[233166]: pam_unix(sudo:session): session closed for user root
Jan 26 16:32:39 compute-0 sudo[233333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqvslahmqodoqxpxfvhfdhzvprhfjifd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445159.2764978-1055-141101986133252/AnsiballZ_file.py'
Jan 26 16:32:39 compute-0 sudo[233333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:32:39 compute-0 podman[233292]: 2026-01-26 16:32:39.748907786 +0000 UTC m=+0.079049506 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 26 16:32:39 compute-0 python3.9[233342]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:32:39 compute-0 sudo[233333]: pam_unix(sudo:session): session closed for user root
Jan 26 16:32:40 compute-0 sudo[233492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqjohsxvrlupnhnaozllpcrhmgxwdvft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445160.2164462-1063-192975250993379/AnsiballZ_command.py'
Jan 26 16:32:40 compute-0 sudo[233492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:32:40 compute-0 python3.9[233494]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 16:32:40 compute-0 sudo[233492]: pam_unix(sudo:session): session closed for user root
Jan 26 16:32:41 compute-0 podman[233541]: 2026-01-26 16:32:41.238500431 +0000 UTC m=+0.116322398 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 16:32:41 compute-0 sudo[233664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kctueonkyuusbmudgggupzkobslbnyxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445161.1151307-1071-214853512227258/AnsiballZ_blockinfile.py'
Jan 26 16:32:41 compute-0 sudo[233664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:32:42 compute-0 python3.9[233666]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:32:42 compute-0 sudo[233664]: pam_unix(sudo:session): session closed for user root
Jan 26 16:32:42 compute-0 sudo[233816]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvybdfvkkthbigoljmybaslfunhwicmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445162.4179952-1080-203322301659796/AnsiballZ_command.py'
Jan 26 16:32:42 compute-0 sudo[233816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:32:43 compute-0 python3.9[233818]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 16:32:43 compute-0 sudo[233816]: pam_unix(sudo:session): session closed for user root
Jan 26 16:32:43 compute-0 sudo[233982]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avdjoezsmkzaywgymrvnimyspujjddsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445163.4518964-1088-107345677356232/AnsiballZ_stat.py'
Jan 26 16:32:43 compute-0 sudo[233982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:32:43 compute-0 podman[233943]: 2026-01-26 16:32:43.975247301 +0000 UTC m=+0.127329457 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi)
Jan 26 16:32:44 compute-0 python3.9[233989]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 16:32:44 compute-0 sudo[233982]: pam_unix(sudo:session): session closed for user root
Jan 26 16:32:44 compute-0 podman[234075]: 2026-01-26 16:32:44.815647417 +0000 UTC m=+0.121478718 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, release-0.7.12=, io.openshift.expose-services=, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, vcs-type=git, io.openshift.tags=base rhel9, managed_by=edpm_ansible, config_id=kepler, vendor=Red Hat, Inc.)
Jan 26 16:32:44 compute-0 sudo[234160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fyjzgjtlejfieuzjfhczmyqudrtqxoai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445164.4937937-1096-270447503784637/AnsiballZ_command.py'
Jan 26 16:32:44 compute-0 sudo[234160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:32:45 compute-0 python3.9[234162]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 16:32:45 compute-0 sudo[234160]: pam_unix(sudo:session): session closed for user root
Jan 26 16:32:46 compute-0 sudo[234331]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igabypiwrsyjjspsripjfpcsbyglqoca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445165.59102-1104-219496500843298/AnsiballZ_file.py'
Jan 26 16:32:46 compute-0 sudo[234331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:32:46 compute-0 podman[234289]: 2026-01-26 16:32:46.148847418 +0000 UTC m=+0.174250511 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 26 16:32:46 compute-0 python3.9[234337]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:32:46 compute-0 sudo[234331]: pam_unix(sudo:session): session closed for user root
Jan 26 16:32:46 compute-0 sshd-session[213230]: Connection closed by 192.168.122.30 port 35206
Jan 26 16:32:46 compute-0 sshd-session[213227]: pam_unix(sshd:session): session closed for user zuul
Jan 26 16:32:46 compute-0 systemd[1]: session-27.scope: Deactivated successfully.
Jan 26 16:32:46 compute-0 systemd[1]: session-27.scope: Consumed 1min 41.896s CPU time.
Jan 26 16:32:46 compute-0 systemd-logind[788]: Session 27 logged out. Waiting for processes to exit.
Jan 26 16:32:46 compute-0 systemd-logind[788]: Removed session 27.
Jan 26 16:32:51 compute-0 sshd-session[234370]: Accepted publickey for zuul from 192.168.122.30 port 37062 ssh2: ECDSA SHA256:e2nTWydmUlQnW5BYhfFSR+TRarHnUqn0luI8Mjiyqhk
Jan 26 16:32:51 compute-0 systemd-logind[788]: New session 28 of user zuul.
Jan 26 16:32:51 compute-0 systemd[1]: Started Session 28 of User zuul.
Jan 26 16:32:51 compute-0 sshd-session[234370]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 26 16:32:52 compute-0 podman[234497]: 2026-01-26 16:32:52.940478988 +0000 UTC m=+0.134220214 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 26 16:32:53 compute-0 python3.9[234534]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 16:32:54 compute-0 sudo[234702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crcejvjhjvziylgddggcfzgxtgqablhm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445173.7819083-29-200488079470688/AnsiballZ_systemd.py'
Jan 26 16:32:54 compute-0 sudo[234702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:32:54 compute-0 python3.9[234704]: ansible-ansible.builtin.systemd Invoked with name=rsyslog daemon_reload=False daemon_reexec=False scope=system no_block=False state=None enabled=None force=None masked=None
Jan 26 16:32:54 compute-0 sudo[234702]: pam_unix(sudo:session): session closed for user root
Jan 26 16:32:55 compute-0 sudo[234855]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjgkthwsxetsftrvwzpsawzkwrypmzxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445175.1276646-37-258400766252669/AnsiballZ_setup.py'
Jan 26 16:32:55 compute-0 sudo[234855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:32:55 compute-0 python3.9[234857]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 26 16:32:56 compute-0 sudo[234855]: pam_unix(sudo:session): session closed for user root
Jan 26 16:32:56 compute-0 sudo[234939]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwckedofwafnqmpmozdigtejhiadgvse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445175.1276646-37-258400766252669/AnsiballZ_dnf.py'
Jan 26 16:32:56 compute-0 sudo[234939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:32:57 compute-0 python3.9[234941]: ansible-ansible.legacy.dnf Invoked with name=['rsyslog-openssl'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 26 16:32:59 compute-0 podman[201244]: time="2026-01-26T16:32:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 16:32:59 compute-0 podman[201244]: @ - - [26/Jan/2026:16:32:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 26 16:32:59 compute-0 podman[201244]: @ - - [26/Jan/2026:16:32:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3852 "" "Go-http-client/1.1"
Jan 26 16:33:00 compute-0 podman[234948]: 2026-01-26 16:33:00.201195439 +0000 UTC m=+0.087483225 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, version=9.6, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, distribution-scope=public, io.buildah.version=1.33.7, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., release=1755695350, io.openshift.expose-services=)
Jan 26 16:33:00 compute-0 sudo[234939]: pam_unix(sudo:session): session closed for user root
Jan 26 16:33:01 compute-0 sudo[235115]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yuemerboheiitrngxeuwprtnehzhdnex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445180.6084597-49-75193686085132/AnsiballZ_stat.py'
Jan 26 16:33:01 compute-0 sudo[235115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:33:01 compute-0 openstack_network_exporter[204387]: ERROR   16:33:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 16:33:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:33:01 compute-0 openstack_network_exporter[204387]: ERROR   16:33:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 16:33:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:33:01 compute-0 python3.9[235117]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/rsyslog/ca-openshift.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:33:01 compute-0 sudo[235115]: pam_unix(sudo:session): session closed for user root
Jan 26 16:33:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:33:01.707 106955 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:33:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:33:01.708 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:33:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:33:01.708 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:33:02 compute-0 sudo[235238]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmmestucpvouthnopwhxxcsekxqhkecn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445180.6084597-49-75193686085132/AnsiballZ_copy.py'
Jan 26 16:33:02 compute-0 sudo[235238]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:33:02 compute-0 python3.9[235240]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/rsyslog/ca-openshift.crt mode=0644 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1769445180.6084597-49-75193686085132/.source.crt _original_basename=ca-openshift.crt follow=False checksum=1d88bab26da5c85710a770c705f3555781bf2a38 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:33:02 compute-0 sudo[235238]: pam_unix(sudo:session): session closed for user root
Jan 26 16:33:03 compute-0 podman[235364]: 2026-01-26 16:33:03.672564614 +0000 UTC m=+0.099742878 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260120, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, container_name=ceilometer_agent_compute)
Jan 26 16:33:03 compute-0 sudo[235407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfipwdvxhzxarjlsfqfevkfipouliboa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445182.6046598-64-232301552133844/AnsiballZ_file.py'
Jan 26 16:33:03 compute-0 sudo[235407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:33:03 compute-0 python3.9[235411]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/rsyslog.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:33:03 compute-0 sudo[235407]: pam_unix(sudo:session): session closed for user root
Jan 26 16:33:04 compute-0 sudo[235561]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iiupzzxxlswykmrsufnhyxqguszefvhc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445184.1986601-72-196381521271997/AnsiballZ_stat.py'
Jan 26 16:33:04 compute-0 sudo[235561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:33:04 compute-0 python3.9[235563]: ansible-ansible.legacy.stat Invoked with path=/etc/rsyslog.d/10-telemetry.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 16:33:04 compute-0 sudo[235561]: pam_unix(sudo:session): session closed for user root
Jan 26 16:33:05 compute-0 sudo[235684]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjbgfkpxjgipwtjbxizpduslytvrrwur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445184.1986601-72-196381521271997/AnsiballZ_copy.py'
Jan 26 16:33:05 compute-0 sudo[235684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:33:05 compute-0 python3.9[235686]: ansible-ansible.legacy.copy Invoked with dest=/etc/rsyslog.d/10-telemetry.conf mode=0644 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1769445184.1986601-72-196381521271997/.source.conf _original_basename=10-telemetry.conf follow=False checksum=76865d9dd4bf9cd322a47065c046bcac194645ab backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 16:33:05 compute-0 sudo[235684]: pam_unix(sudo:session): session closed for user root
Jan 26 16:33:06 compute-0 sudo[235836]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqjcrihmectsihlsfittmvhgeokrcihb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769445185.7836533-87-82418983664021/AnsiballZ_systemd.py'
Jan 26 16:33:06 compute-0 sudo[235836]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:33:06 compute-0 python3.9[235838]: ansible-ansible.builtin.systemd Invoked with name=rsyslog.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 26 16:33:06 compute-0 systemd[1]: Stopping System Logging Service...
Jan 26 16:33:06 compute-0 rsyslogd[1006]: imjournal: 2664 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Jan 26 16:33:06 compute-0 rsyslogd[1006]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1006" x-info="https://www.rsyslog.com"] exiting on signal 15.
Jan 26 16:33:06 compute-0 systemd[1]: rsyslog.service: Deactivated successfully.
Jan 26 16:33:06 compute-0 systemd[1]: Stopped System Logging Service.
Jan 26 16:33:06 compute-0 systemd[1]: rsyslog.service: Consumed 3.879s CPU time, 9.9M memory peak, read 0B from disk, written 5.6M to disk.
Jan 26 16:33:06 compute-0 systemd[1]: Starting System Logging Service...
Jan 26 16:33:06 compute-0 rsyslogd[235842]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="235842" x-info="https://www.rsyslog.com"] start
Jan 26 16:33:06 compute-0 systemd[1]: Started System Logging Service.
Jan 26 16:33:06 compute-0 rsyslogd[235842]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 26 16:33:06 compute-0 rsyslogd[235842]: Warning: Certificate file is not set [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2330 ]
Jan 26 16:33:06 compute-0 rsyslogd[235842]: Warning: Key file is not set [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2331 ]
Jan 26 16:33:06 compute-0 rsyslogd[235842]: nsd_ossl: TLS Connection initiated with remote syslog server '172.17.0.80'. [v8.2510.0-2.el9]
Jan 26 16:33:07 compute-0 sudo[235836]: pam_unix(sudo:session): session closed for user root
Jan 26 16:33:07 compute-0 rsyslogd[235842]: nsd_ossl: Information, no shared curve between syslog client '172.17.0.80' and server [v8.2510.0-2.el9]
Jan 26 16:33:07 compute-0 sshd-session[234373]: Connection closed by 192.168.122.30 port 37062
Jan 26 16:33:07 compute-0 sshd-session[234370]: pam_unix(sshd:session): session closed for user zuul
Jan 26 16:33:07 compute-0 systemd[1]: session-28.scope: Deactivated successfully.
Jan 26 16:33:07 compute-0 systemd[1]: session-28.scope: Consumed 11.205s CPU time.
Jan 26 16:33:07 compute-0 systemd-logind[788]: Session 28 logged out. Waiting for processes to exit.
Jan 26 16:33:07 compute-0 systemd-logind[788]: Removed session 28.
Jan 26 16:33:10 compute-0 podman[235871]: 2026-01-26 16:33:10.183492148 +0000 UTC m=+0.074439840 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 26 16:33:12 compute-0 podman[235895]: 2026-01-26 16:33:12.262175699 +0000 UTC m=+0.141019718 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 26 16:33:14 compute-0 podman[235913]: 2026-01-26 16:33:14.19973815 +0000 UTC m=+0.082062567 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, 
container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:33:15 compute-0 podman[235932]: 2026-01-26 16:33:15.208350052 +0000 UTC m=+0.092505491 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, io.openshift.expose-services=, maintainer=Red Hat, Inc., release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., managed_by=edpm_ansible, architecture=x86_64, com.redhat.component=ubi9-container, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9)
Jan 26 16:33:17 compute-0 podman[235953]: 2026-01-26 16:33:17.261332186 +0000 UTC m=+0.150472515 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, 
org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Jan 26 16:33:23 compute-0 podman[235980]: 2026-01-26 16:33:23.237912538 +0000 UTC m=+0.127136441 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 26 16:33:27 compute-0 nova_compute[185389]: 2026-01-26 16:33:27.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:33:27 compute-0 nova_compute[185389]: 2026-01-26 16:33:27.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:33:28 compute-0 nova_compute[185389]: 2026-01-26 16:33:28.718 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:33:29 compute-0 nova_compute[185389]: 2026-01-26 16:33:29.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:33:29 compute-0 nova_compute[185389]: 2026-01-26 16:33:29.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:33:29 compute-0 nova_compute[185389]: 2026-01-26 16:33:29.720 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:33:29 compute-0 nova_compute[185389]: 2026-01-26 16:33:29.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:33:29 compute-0 podman[201244]: time="2026-01-26T16:33:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 16:33:29 compute-0 podman[201244]: @ - - [26/Jan/2026:16:33:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 26 16:33:29 compute-0 nova_compute[185389]: 2026-01-26 16:33:29.756 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:33:29 compute-0 nova_compute[185389]: 2026-01-26 16:33:29.756 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:33:29 compute-0 nova_compute[185389]: 2026-01-26 16:33:29.757 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:33:29 compute-0 nova_compute[185389]: 2026-01-26 16:33:29.757 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:33:29 compute-0 podman[201244]: @ - - [26/Jan/2026:16:33:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3852 "" "Go-http-client/1.1"
Jan 26 16:33:30 compute-0 nova_compute[185389]: 2026-01-26 16:33:30.117 185393 WARNING nova.virt.libvirt.driver [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:33:30 compute-0 nova_compute[185389]: 2026-01-26 16:33:30.119 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5716MB free_disk=72.4771728515625GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:33:30 compute-0 nova_compute[185389]: 2026-01-26 16:33:30.120 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:33:30 compute-0 nova_compute[185389]: 2026-01-26 16:33:30.120 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:33:30 compute-0 nova_compute[185389]: 2026-01-26 16:33:30.426 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:33:30 compute-0 nova_compute[185389]: 2026-01-26 16:33:30.426 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:33:30 compute-0 nova_compute[185389]: 2026-01-26 16:33:30.453 185393 DEBUG nova.compute.provider_tree [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:33:30 compute-0 nova_compute[185389]: 2026-01-26 16:33:30.470 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:33:30 compute-0 nova_compute[185389]: 2026-01-26 16:33:30.472 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:33:30 compute-0 nova_compute[185389]: 2026-01-26 16:33:30.472 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.352s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:33:31 compute-0 podman[236003]: 2026-01-26 16:33:31.221332441 +0000 UTC m=+0.112077063 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, architecture=x86_64, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, release=1755695350, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': 
['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vendor=Red Hat, Inc.)
Jan 26 16:33:31 compute-0 openstack_network_exporter[204387]: ERROR   16:33:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 16:33:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:33:31 compute-0 openstack_network_exporter[204387]: ERROR   16:33:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 16:33:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:33:31 compute-0 nova_compute[185389]: 2026-01-26 16:33:31.468 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:33:31 compute-0 nova_compute[185389]: 2026-01-26 16:33:31.714 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:33:31 compute-0 nova_compute[185389]: 2026-01-26 16:33:31.743 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:33:31 compute-0 nova_compute[185389]: 2026-01-26 16:33:31.744 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:33:31 compute-0 nova_compute[185389]: 2026-01-26 16:33:31.745 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 16:33:31 compute-0 nova_compute[185389]: 2026-01-26 16:33:31.785 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 16:33:32 compute-0 nova_compute[185389]: 2026-01-26 16:33:32.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:33:34 compute-0 podman[236024]: 2026-01-26 16:33:34.194318171 +0000 UTC m=+0.088704737 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260120, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, tcib_managed=true, managed_by=edpm_ansible)
Jan 26 16:33:36 compute-0 sshd-session[236042]: Accepted publickey for zuul from 38.102.83.145 port 44546 ssh2: RSA SHA256:CwDInbOSxpxqp3mWwtfmY0v0Zi73QXMq6svTI6Qp+40
Jan 26 16:33:36 compute-0 systemd-logind[788]: New session 29 of user zuul.
Jan 26 16:33:36 compute-0 systemd[1]: Started Session 29 of User zuul.
Jan 26 16:33:36 compute-0 sshd-session[236042]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 26 16:33:37 compute-0 python3[236219]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 16:33:39 compute-0 sudo[236440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxpvxwsxirbyhtfwwhgpfkmtygpyrhmd ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769445219.0700438-37033-19348246350300/AnsiballZ_command.py'
Jan 26 16:33:39 compute-0 sudo[236440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:33:39 compute-0 python3[236442]: ansible-ansible.legacy.command Invoked with _raw_params=tstamp=$(date -d '30 minute ago' "+%Y-%m-%d %H:%M:%S")
                                           journalctl -t "ceilometer_agent_compute" --no-pager -S "${tstamp}"
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 16:33:40 compute-0 sudo[236440]: pam_unix(sudo:session): session closed for user root
Jan 26 16:33:40 compute-0 sudo[236607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynixeusrljletgcylxaouhueixwuagdv ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769445220.3060129-37044-73736587348270/AnsiballZ_command.py'
Jan 26 16:33:40 compute-0 sudo[236607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:33:40 compute-0 podman[236568]: 2026-01-26 16:33:40.713024615 +0000 UTC m=+0.083113186 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 26 16:33:40 compute-0 python3[236618]: ansible-ansible.legacy.command Invoked with _raw_params=tstamp=$(date -d '30 minute ago' "+%Y-%m-%d %H:%M:%S")
                                           journalctl -t "nova_compute" --no-pager -S "${tstamp}"
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 16:33:42 compute-0 sudo[236607]: pam_unix(sudo:session): session closed for user root
Jan 26 16:33:43 compute-0 podman[236645]: 2026-01-26 16:33:43.266656275 +0000 UTC m=+0.139570248 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Jan 26 16:33:43 compute-0 python3[236787]: ansible-ansible.builtin.stat Invoked with path=/etc/rsyslog.d/10-telemetry.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 26 16:33:44 compute-0 podman[236858]: 2026-01-26 16:33:44.801457392 +0000 UTC m=+0.116534671 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Jan 26 16:33:45 compute-0 sudo[236957]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sexofvwjtdvjyyijnwrxrlffklhvimpx ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769445224.5616982-37090-166453180509134/AnsiballZ_setup.py'
Jan 26 16:33:45 compute-0 sudo[236957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:33:45 compute-0 python3[236959]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 16:33:46 compute-0 podman[237030]: 2026-01-26 16:33:46.265144805 +0000 UTC m=+0.138590770 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., version=9.4, distribution-scope=public, managed_by=edpm_ansible, release=1214.1726694543, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=kepler, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, maintainer=Red Hat, Inc., vcs-type=git, build-date=2024-09-18T21:23:30, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9)
Jan 26 16:33:46 compute-0 sudo[236957]: pam_unix(sudo:session): session closed for user root
Jan 26 16:33:47 compute-0 sudo[237215]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysnqpldsjzrhtputqkycixjvnndqiqcu ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769445227.2148235-37121-221973277724800/AnsiballZ_command.py'
Jan 26 16:33:47 compute-0 sudo[237215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:33:47 compute-0 podman[237176]: 2026-01-26 16:33:47.650645322 +0000 UTC m=+0.112469670 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, 
org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 26 16:33:47 compute-0 python3[237221]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep ceilometer_agent_compute
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 16:33:47 compute-0 sudo[237215]: pam_unix(sudo:session): session closed for user root
Jan 26 16:33:48 compute-0 sudo[237389]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fonsbfksggrmxelrcntomjdtksvfafcw ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769445228.2311504-37138-243281214621278/AnsiballZ_command.py'
Jan 26 16:33:48 compute-0 sudo[237389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 16:33:48 compute-0 python3[237391]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep node_exporter
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 16:33:48 compute-0 sudo[237389]: pam_unix(sudo:session): session closed for user root
Jan 26 16:33:54 compute-0 podman[237430]: 2026-01-26 16:33:54.242228549 +0000 UTC m=+0.127159811 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 26 16:33:59 compute-0 podman[201244]: time="2026-01-26T16:33:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 16:33:59 compute-0 podman[201244]: @ - - [26/Jan/2026:16:33:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 26 16:33:59 compute-0 podman[201244]: @ - - [26/Jan/2026:16:33:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3847 "" "Go-http-client/1.1"
Jan 26 16:34:01 compute-0 openstack_network_exporter[204387]: ERROR   16:34:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 16:34:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:34:01 compute-0 openstack_network_exporter[204387]: ERROR   16:34:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 16:34:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:34:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:34:01.709 106955 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:34:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:34:01.709 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:34:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:34:01.710 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:34:02 compute-0 podman[237454]: 2026-01-26 16:34:02.191730031 +0000 UTC m=+0.067959079 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 
'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., architecture=x86_64, vcs-type=git, build-date=2025-08-20T13:12:41, distribution-scope=public, managed_by=edpm_ansible, io.buildah.version=1.33.7, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 26 16:34:05 compute-0 podman[237473]: 2026-01-26 16:34:05.224167696 +0000 UTC m=+0.110681732 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, tcib_managed=true, org.label-schema.build-date=20260120, org.label-schema.schema-version=1.0)
Jan 26 16:34:11 compute-0 podman[237492]: 2026-01-26 16:34:11.23630254 +0000 UTC m=+0.118231206 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 26 16:34:14 compute-0 podman[237513]: 2026-01-26 16:34:14.185214454 +0000 UTC m=+0.074570950 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Jan 26 16:34:15 compute-0 podman[237531]: 2026-01-26 16:34:15.267829501 +0000 UTC m=+0.141255843 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 26 16:34:17 compute-0 podman[237551]: 2026-01-26 16:34:17.23281357 +0000 UTC m=+0.104555945 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, container_name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., config_id=kepler, distribution-scope=public, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1214.1726694543, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, architecture=x86_64, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., name=ubi9, io.buildah.version=1.29.0, version=9.4, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, release-0.7.12=)
Jan 26 16:34:18 compute-0 podman[237570]: 2026-01-26 16:34:18.265616803 +0000 UTC m=+0.153837576 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, 
org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 26 16:34:25 compute-0 podman[237595]: 2026-01-26 16:34:25.187315107 +0000 UTC m=+0.070688233 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 16:34:29 compute-0 nova_compute[185389]: 2026-01-26 16:34:29.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:34:29 compute-0 nova_compute[185389]: 2026-01-26 16:34:29.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:34:29 compute-0 nova_compute[185389]: 2026-01-26 16:34:29.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:34:29 compute-0 nova_compute[185389]: 2026-01-26 16:34:29.721 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:34:29 compute-0 nova_compute[185389]: 2026-01-26 16:34:29.721 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:34:29 compute-0 podman[201244]: time="2026-01-26T16:34:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 16:34:29 compute-0 podman[201244]: @ - - [26/Jan/2026:16:34:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 26 16:34:29 compute-0 podman[201244]: @ - - [26/Jan/2026:16:34:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3852 "" "Go-http-client/1.1"
Jan 26 16:34:30 compute-0 nova_compute[185389]: 2026-01-26 16:34:30.382 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:34:30 compute-0 nova_compute[185389]: 2026-01-26 16:34:30.383 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:34:30 compute-0 nova_compute[185389]: 2026-01-26 16:34:30.383 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:34:30 compute-0 nova_compute[185389]: 2026-01-26 16:34:30.383 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:34:30 compute-0 nova_compute[185389]: 2026-01-26 16:34:30.789 185393 WARNING nova.virt.libvirt.driver [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:34:30 compute-0 nova_compute[185389]: 2026-01-26 16:34:30.791 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5725MB free_disk=72.47664642333984GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:34:30 compute-0 nova_compute[185389]: 2026-01-26 16:34:30.791 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:34:30 compute-0 nova_compute[185389]: 2026-01-26 16:34:30.792 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.329 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.329 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cf9a2a20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.330 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f04ce8af020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cf9a2a20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cf9a2a20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cf9a2a20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cf9a2a20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cf9a2a20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cf9a2a20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cf9a2a20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cf9a2a20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cf9a2a20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cf9a2a20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.332 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cf9a2a20>] with cache [{}], pollster history [{'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.332 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cf9a2a20>] with cache [{}], pollster history [{'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.332 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8adb20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cf9a2a20>] with cache [{}], pollster history [{'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.332 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cf9a2a20>] with cache [{}], pollster history [{'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.332 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cf9a2a20>] with cache [{}], pollster history [{'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cf9a2a20>] with cache [{}], pollster history [{'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cf9a2a20>] with cache [{}], pollster history [{'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cf9a2a20>] with cache [{}], pollster history [{'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.332 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.334 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f04ce8af080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cf9a2a20>] with cache [{}], pollster history [{'disk.device.write.bytes': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.334 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.335 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f04ce8af0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.335 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.335 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f04ce8af470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.335 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.335 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f04ce8ac260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.336 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.336 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f04cfa81c70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.336 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.336 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f04ce8ac170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.336 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.336 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f04ce8af140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.336 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.336 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f04ce8adb50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.337 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.337 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f04ce8ad9a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.337 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.337 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f04ce8af1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.337 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.337 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f04ce8ad8e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.338 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cf9a2a20>] with cache [{}], pollster history [{'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'cpu': [], 'network.incoming.packets': [], 'disk.ephemeral.size': [], 'network.incoming.packets.drop': [], 'network.outgoing.bytes': [], 'disk.root.size': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.338 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f04ce8ada60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.339 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.339 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f04ce8adaf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.339 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.338 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cf9a2a20>] with cache [{}], pollster history [{'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'cpu': [], 'network.incoming.packets': [], 'disk.ephemeral.size': [], 'network.incoming.packets.drop': [], 'network.outgoing.bytes': [], 'disk.root.size': [], 'network.outgoing.packets': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.339 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f04ce8ad880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.340 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.340 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f04ce8af3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.340 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.340 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f04ce8af6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.340 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cf9a2a20>] with cache [{}], pollster history [{'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'cpu': [], 'network.incoming.packets': [], 'disk.ephemeral.size': [], 'network.incoming.packets.drop': [], 'network.outgoing.bytes': [], 'disk.root.size': [], 'network.outgoing.packets': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.packets.error': [], 'memory.usage': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.341 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cf9a2a20>] with cache [{}], pollster history [{'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'cpu': [], 'network.incoming.packets': [], 'disk.ephemeral.size': [], 'network.incoming.packets.drop': [], 'network.outgoing.bytes': [], 'disk.root.size': [], 'network.outgoing.packets': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.packets.error': [], 'memory.usage': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.340 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.342 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f04ce8af410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.342 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.342 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f04ce8ac470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.342 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.342 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f04d17142f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.342 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.341 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cf9a2a20>] with cache [{}], pollster history [{'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'cpu': [], 'network.incoming.packets': [], 'disk.ephemeral.size': [], 'network.incoming.packets.drop': [], 'network.outgoing.bytes': [], 'disk.root.size': [], 'network.outgoing.packets': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.packets.error': [], 'memory.usage': [], 'network.outgoing.packets.drop': [], 'network.incoming.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.342 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f04ce8afe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.343 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.343 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f04ce8aede0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.344 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.344 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f04ce8aef00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.343 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cf9a2a20>] with cache [{}], pollster history [{'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'cpu': [], 'network.incoming.packets': [], 'disk.ephemeral.size': [], 'network.incoming.packets.drop': [], 'network.outgoing.bytes': [], 'disk.root.size': [], 'network.outgoing.packets': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.packets.error': [], 'memory.usage': [], 'network.outgoing.packets.drop': [], 'network.incoming.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.allocation': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.344 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.345 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f04ce8ac710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.345 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.345 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f04ce8aef60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.345 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.345 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f04ce8aefc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.345 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.346 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.346 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.346 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.346 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.346 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.346 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.346 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.346 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.346 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.346 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.346 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.346 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.346 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.346 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.347 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.347 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.347 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.347 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.347 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.347 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.347 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.347 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.347 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.347 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.347 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:34:31.347 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:34:31 compute-0 openstack_network_exporter[204387]: ERROR   16:34:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 16:34:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:34:31 compute-0 openstack_network_exporter[204387]: ERROR   16:34:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 16:34:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:34:31 compute-0 nova_compute[185389]: 2026-01-26 16:34:31.960 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:34:31 compute-0 nova_compute[185389]: 2026-01-26 16:34:31.961 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:34:31 compute-0 nova_compute[185389]: 2026-01-26 16:34:31.996 185393 DEBUG nova.compute.provider_tree [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:34:32 compute-0 nova_compute[185389]: 2026-01-26 16:34:32.399 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:34:32 compute-0 nova_compute[185389]: 2026-01-26 16:34:32.402 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:34:32 compute-0 nova_compute[185389]: 2026-01-26 16:34:32.402 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.610s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:34:33 compute-0 podman[237620]: 2026-01-26 16:34:33.209377394 +0000 UTC m=+0.080639214 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', 
'/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=openstack_network_exporter, io.buildah.version=1.33.7, version=9.6, architecture=x86_64, distribution-scope=public, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, name=ubi9-minimal, io.openshift.expose-services=)
Jan 26 16:34:33 compute-0 nova_compute[185389]: 2026-01-26 16:34:33.398 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:34:33 compute-0 nova_compute[185389]: 2026-01-26 16:34:33.399 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:34:33 compute-0 nova_compute[185389]: 2026-01-26 16:34:33.399 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:34:33 compute-0 nova_compute[185389]: 2026-01-26 16:34:33.399 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 16:34:33 compute-0 nova_compute[185389]: 2026-01-26 16:34:33.461 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 16:34:33 compute-0 nova_compute[185389]: 2026-01-26 16:34:33.462 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:34:33 compute-0 nova_compute[185389]: 2026-01-26 16:34:33.462 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:34:33 compute-0 nova_compute[185389]: 2026-01-26 16:34:33.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:34:36 compute-0 podman[237640]: 2026-01-26 16:34:36.179284488 +0000 UTC m=+0.069030459 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.build-date=20260120, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 26 16:34:42 compute-0 podman[237659]: 2026-01-26 16:34:42.215532539 +0000 UTC m=+0.097028340 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 26 16:34:44 compute-0 podman[237682]: 2026-01-26 16:34:44.832519203 +0000 UTC m=+0.121162037 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:34:46 compute-0 podman[237701]: 2026-01-26 16:34:46.190707367 +0000 UTC m=+0.075149375 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:34:48 compute-0 podman[237720]: 2026-01-26 16:34:48.237900546 +0000 UTC m=+0.113482898 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, name=ubi9, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., release-0.7.12=, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.4, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler)
Jan 26 16:34:49 compute-0 sshd-session[236045]: Received disconnect from 38.102.83.145 port 44546:11: disconnected by user
Jan 26 16:34:49 compute-0 sshd-session[236045]: Disconnected from user zuul 38.102.83.145 port 44546
Jan 26 16:34:49 compute-0 sshd-session[236042]: pam_unix(sshd:session): session closed for user zuul
Jan 26 16:34:49 compute-0 systemd[1]: session-29.scope: Deactivated successfully.
Jan 26 16:34:49 compute-0 systemd[1]: session-29.scope: Consumed 10.385s CPU time.
Jan 26 16:34:49 compute-0 systemd-logind[788]: Session 29 logged out. Waiting for processes to exit.
Jan 26 16:34:49 compute-0 systemd-logind[788]: Removed session 29.
Jan 26 16:34:49 compute-0 podman[237739]: 2026-01-26 16:34:49.278475107 +0000 UTC m=+0.161560033 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 26 16:34:56 compute-0 podman[237767]: 2026-01-26 16:34:56.233413183 +0000 UTC m=+0.106136715 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 26 16:34:59 compute-0 podman[201244]: time="2026-01-26T16:34:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 16:34:59 compute-0 podman[201244]: @ - - [26/Jan/2026:16:34:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 26 16:34:59 compute-0 podman[201244]: @ - - [26/Jan/2026:16:34:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3851 "" "Go-http-client/1.1"
Jan 26 16:35:01 compute-0 openstack_network_exporter[204387]: ERROR   16:35:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 16:35:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:35:01 compute-0 openstack_network_exporter[204387]: ERROR   16:35:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 16:35:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:35:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:35:01.710 106955 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:35:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:35:01.711 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:35:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:35:01.711 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:35:04 compute-0 podman[237789]: 2026-01-26 16:35:04.240511109 +0000 UTC m=+0.122090455 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, config_id=openstack_network_exporter, io.buildah.version=1.33.7, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., version=9.6, architecture=x86_64, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public)
Jan 26 16:35:07 compute-0 podman[237810]: 2026-01-26 16:35:07.256577129 +0000 UTC m=+0.130015153 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_compute, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20260120, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 16:35:13 compute-0 podman[237829]: 2026-01-26 16:35:13.214712583 +0000 UTC m=+0.088435187 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 26 16:35:15 compute-0 podman[237852]: 2026-01-26 16:35:15.249353302 +0000 UTC m=+0.121423016 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 26 16:35:17 compute-0 podman[237871]: 2026-01-26 16:35:17.220621377 +0000 UTC m=+0.106114815 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:35:19 compute-0 podman[237891]: 2026-01-26 16:35:19.234529165 +0000 UTC m=+0.122313881 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, vcs-type=git, version=9.4, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, config_id=kepler, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, vendor=Red Hat, Inc., container_name=kepler, io.buildah.version=1.29.0, name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., distribution-scope=public, io.openshift.tags=base rhel9)
Jan 26 16:35:20 compute-0 podman[237911]: 2026-01-26 16:35:20.292166566 +0000 UTC m=+0.175777424 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, 
org.label-schema.vendor=CentOS, config_id=ovn_controller)
Jan 26 16:35:27 compute-0 podman[237936]: 2026-01-26 16:35:27.195525673 +0000 UTC m=+0.083527231 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 26 16:35:29 compute-0 nova_compute[185389]: 2026-01-26 16:35:29.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:35:29 compute-0 podman[201244]: time="2026-01-26T16:35:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 16:35:29 compute-0 podman[201244]: @ - - [26/Jan/2026:16:35:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 26 16:35:29 compute-0 nova_compute[185389]: 2026-01-26 16:35:29.757 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:35:29 compute-0 nova_compute[185389]: 2026-01-26 16:35:29.757 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:35:29 compute-0 nova_compute[185389]: 2026-01-26 16:35:29.757 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:35:29 compute-0 nova_compute[185389]: 2026-01-26 16:35:29.758 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:35:29 compute-0 podman[201244]: @ - - [26/Jan/2026:16:35:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3859 "" "Go-http-client/1.1"
Jan 26 16:35:30 compute-0 nova_compute[185389]: 2026-01-26 16:35:30.102 185393 WARNING nova.virt.libvirt.driver [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:35:30 compute-0 nova_compute[185389]: 2026-01-26 16:35:30.103 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5721MB free_disk=72.47666549682617GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:35:30 compute-0 nova_compute[185389]: 2026-01-26 16:35:30.103 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:35:30 compute-0 nova_compute[185389]: 2026-01-26 16:35:30.103 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:35:30 compute-0 nova_compute[185389]: 2026-01-26 16:35:30.177 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:35:30 compute-0 nova_compute[185389]: 2026-01-26 16:35:30.178 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:35:30 compute-0 nova_compute[185389]: 2026-01-26 16:35:30.203 185393 DEBUG nova.compute.provider_tree [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:35:30 compute-0 nova_compute[185389]: 2026-01-26 16:35:30.222 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:35:30 compute-0 nova_compute[185389]: 2026-01-26 16:35:30.223 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:35:30 compute-0 nova_compute[185389]: 2026-01-26 16:35:30.224 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.120s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:35:31 compute-0 nova_compute[185389]: 2026-01-26 16:35:31.224 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:35:31 compute-0 nova_compute[185389]: 2026-01-26 16:35:31.225 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:35:31 compute-0 nova_compute[185389]: 2026-01-26 16:35:31.225 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:35:31 compute-0 openstack_network_exporter[204387]: ERROR   16:35:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 16:35:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:35:31 compute-0 openstack_network_exporter[204387]: ERROR   16:35:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 16:35:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:35:31 compute-0 nova_compute[185389]: 2026-01-26 16:35:31.716 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:35:31 compute-0 nova_compute[185389]: 2026-01-26 16:35:31.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:35:31 compute-0 nova_compute[185389]: 2026-01-26 16:35:31.720 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:35:31 compute-0 nova_compute[185389]: 2026-01-26 16:35:31.721 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 16:35:31 compute-0 nova_compute[185389]: 2026-01-26 16:35:31.739 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 16:35:31 compute-0 nova_compute[185389]: 2026-01-26 16:35:31.741 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:35:31 compute-0 nova_compute[185389]: 2026-01-26 16:35:31.741 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:35:32 compute-0 nova_compute[185389]: 2026-01-26 16:35:32.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:35:33 compute-0 nova_compute[185389]: 2026-01-26 16:35:33.715 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:35:34 compute-0 nova_compute[185389]: 2026-01-26 16:35:34.718 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:35:35 compute-0 podman[237958]: 2026-01-26 16:35:35.185099774 +0000 UTC m=+0.076584380 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.buildah.version=1.33.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, 
vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, architecture=x86_64, io.openshift.tags=minimal rhel9, name=ubi9-minimal, release=1755695350, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 26 16:35:38 compute-0 podman[237978]: 2026-01-26 16:35:38.228548128 +0000 UTC m=+0.112306595 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, config_id=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260120, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 26 16:35:44 compute-0 podman[237995]: 2026-01-26 16:35:44.17542353 +0000 UTC m=+0.063572273 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 26 16:35:46 compute-0 podman[238018]: 2026-01-26 16:35:46.233159997 +0000 UTC m=+0.105034046 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 26 16:35:48 compute-0 podman[238036]: 2026-01-26 16:35:48.265330138 +0000 UTC m=+0.136153723 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_ipmi, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Jan 26 16:35:50 compute-0 podman[238057]: 2026-01-26 16:35:50.213934598 +0000 UTC m=+0.100509481 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1214.1726694543, io.openshift.tags=base rhel9, vcs-type=git, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, maintainer=Red Hat, Inc., version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=kepler, build-date=2024-09-18T21:23:30)
Jan 26 16:35:51 compute-0 podman[238076]: 2026-01-26 16:35:51.278633363 +0000 UTC m=+0.162909339 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:35:58 compute-0 podman[238102]: 2026-01-26 16:35:58.181994681 +0000 UTC m=+0.066371753 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 26 16:35:59 compute-0 podman[201244]: time="2026-01-26T16:35:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 16:35:59 compute-0 podman[201244]: @ - - [26/Jan/2026:16:35:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 26 16:35:59 compute-0 podman[201244]: @ - - [26/Jan/2026:16:35:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3846 "" "Go-http-client/1.1"
Jan 26 16:36:01 compute-0 openstack_network_exporter[204387]: ERROR   16:36:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 16:36:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:36:01 compute-0 openstack_network_exporter[204387]: ERROR   16:36:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 16:36:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:36:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:36:01.712 106955 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:36:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:36:01.712 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:36:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:36:01.712 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:36:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:36:03.173 106955 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '0e:35:12', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'de:09:3c:73:7d:ab'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:36:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:36:03.173 106955 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 26 16:36:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:36:03.174 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1c72c11d-5050-47c3-89e8-912766588fb3, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:36:06 compute-0 podman[238126]: 2026-01-26 16:36:06.240564497 +0000 UTC m=+0.117866321 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., architecture=x86_64, config_id=openstack_network_exporter, release=1755695350, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, version=9.6)
Jan 26 16:36:09 compute-0 podman[238146]: 2026-01-26 16:36:09.271665717 +0000 UTC m=+0.144359346 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20260120, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, tcib_managed=true, container_name=ceilometer_agent_compute)
Jan 26 16:36:14 compute-0 podman[238164]: 2026-01-26 16:36:14.785740124 +0000 UTC m=+0.084170276 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 26 16:36:17 compute-0 podman[238188]: 2026-01-26 16:36:17.22157555 +0000 UTC m=+0.101297661 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 26 16:36:19 compute-0 podman[238206]: 2026-01-26 16:36:19.224349313 +0000 UTC m=+0.110215889 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi)
Jan 26 16:36:21 compute-0 podman[238225]: 2026-01-26 16:36:21.212411836 +0000 UTC m=+0.092798445 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, name=ubi9, io.openshift.expose-services=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, config_id=kepler, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, version=9.4, container_name=kepler, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 26 16:36:22 compute-0 podman[238244]: 2026-01-26 16:36:22.2474165 +0000 UTC m=+0.122823248 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 16:36:29 compute-0 podman[238271]: 2026-01-26 16:36:29.266062234 +0000 UTC m=+0.128362113 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 26 16:36:29 compute-0 podman[201244]: time="2026-01-26T16:36:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 16:36:29 compute-0 podman[201244]: @ - - [26/Jan/2026:16:36:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 26 16:36:29 compute-0 podman[201244]: @ - - [26/Jan/2026:16:36:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3860 "" "Go-http-client/1.1"
Jan 26 16:36:30 compute-0 nova_compute[185389]: 2026-01-26 16:36:30.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:36:30 compute-0 nova_compute[185389]: 2026-01-26 16:36:30.721 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.330 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.331 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce6ae600>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.332 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f04ce8af020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.332 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce6ae600>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce6ae600>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce6ae600>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce6ae600>] with cache [{}], pollster history [{'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce6ae600>] with cache [{}], pollster history [{'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce6ae600>] with cache [{}], pollster history [{'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.337 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce6ae600>] with cache [{}], pollster history [{'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.338 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce6ae600>] with cache [{}], pollster history [{'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.339 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce6ae600>] with cache [{}], pollster history [{'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.339 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce6ae600>] with cache [{}], pollster history [{'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.340 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce6ae600>] with cache [{}], pollster history [{'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.341 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce6ae600>] with cache [{}], pollster history [{'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.341 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8adb20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce6ae600>] with cache [{}], pollster history [{'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.342 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce6ae600>] with cache [{}], pollster history [{'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.343 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce6ae600>] with cache [{}], pollster history [{'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.343 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce6ae600>] with cache [{}], pollster history [{'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.344 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce6ae600>] with cache [{}], pollster history [{'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.345 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce6ae600>] with cache [{}], pollster history [{'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.345 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce6ae600>] with cache [{}], pollster history [{'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.346 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce6ae600>] with cache [{}], pollster history [{'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce6ae600>] with cache [{}], pollster history [{'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce6ae600>] with cache [{}], pollster history [{'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.348 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce6ae600>] with cache [{}], pollster history [{'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.349 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce6ae600>] with cache [{}], pollster history [{'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.349 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce6ae600>] with cache [{}], pollster history [{'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.335 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.350 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f04ce8af080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.351 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.351 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f04ce8af0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.351 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.351 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f04ce8af470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.352 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.352 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f04ce8ac260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.352 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.352 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f04cfa81c70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.353 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.353 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f04ce8ac170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.353 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.353 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f04ce8af140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.354 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.354 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f04ce8adb50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.354 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.355 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f04ce8ad9a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.355 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.355 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f04ce8af1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.355 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.356 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f04ce8ad8e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.356 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.356 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f04ce8ada60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.356 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.357 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f04ce8adaf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.357 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.357 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f04ce8ad880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.357 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.358 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f04ce8af3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.358 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.358 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f04ce8af6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.358 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.359 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f04ce8af410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.359 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.359 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f04ce8ac470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.359 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.360 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f04d17142f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.360 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.360 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f04ce8afe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.360 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.361 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f04ce8aede0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.361 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.361 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f04ce8aef00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.361 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.362 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f04ce8ac710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.362 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.362 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f04ce8aef60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.362 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.363 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f04ce8aefc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.363 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.364 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.364 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.364 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.364 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.364 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.364 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.364 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.365 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.365 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.365 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.365 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.365 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.365 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.366 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.366 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.366 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.366 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.366 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.366 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.367 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.367 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.367 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.367 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.367 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.367 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:36:31.368 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:36:31 compute-0 openstack_network_exporter[204387]: ERROR   16:36:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 16:36:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:36:31 compute-0 openstack_network_exporter[204387]: ERROR   16:36:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 16:36:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:36:31 compute-0 nova_compute[185389]: 2026-01-26 16:36:31.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:36:31 compute-0 nova_compute[185389]: 2026-01-26 16:36:31.721 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:36:31 compute-0 nova_compute[185389]: 2026-01-26 16:36:31.721 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:36:31 compute-0 nova_compute[185389]: 2026-01-26 16:36:31.755 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:36:31 compute-0 nova_compute[185389]: 2026-01-26 16:36:31.756 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:36:31 compute-0 nova_compute[185389]: 2026-01-26 16:36:31.757 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:36:31 compute-0 nova_compute[185389]: 2026-01-26 16:36:31.759 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:36:32 compute-0 nova_compute[185389]: 2026-01-26 16:36:32.244 185393 WARNING nova.virt.libvirt.driver [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:36:32 compute-0 nova_compute[185389]: 2026-01-26 16:36:32.245 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5718MB free_disk=72.47664642333984GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:36:32 compute-0 nova_compute[185389]: 2026-01-26 16:36:32.246 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:36:32 compute-0 nova_compute[185389]: 2026-01-26 16:36:32.246 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:36:34 compute-0 nova_compute[185389]: 2026-01-26 16:36:34.288 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:36:34 compute-0 nova_compute[185389]: 2026-01-26 16:36:34.289 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:36:34 compute-0 nova_compute[185389]: 2026-01-26 16:36:34.317 185393 DEBUG nova.compute.provider_tree [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:36:34 compute-0 nova_compute[185389]: 2026-01-26 16:36:34.331 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:36:34 compute-0 nova_compute[185389]: 2026-01-26 16:36:34.332 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:36:34 compute-0 nova_compute[185389]: 2026-01-26 16:36:34.333 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.087s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:36:35 compute-0 nova_compute[185389]: 2026-01-26 16:36:35.332 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:36:35 compute-0 nova_compute[185389]: 2026-01-26 16:36:35.333 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:36:35 compute-0 nova_compute[185389]: 2026-01-26 16:36:35.333 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:36:35 compute-0 nova_compute[185389]: 2026-01-26 16:36:35.334 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 16:36:35 compute-0 nova_compute[185389]: 2026-01-26 16:36:35.349 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 16:36:35 compute-0 nova_compute[185389]: 2026-01-26 16:36:35.350 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:36:35 compute-0 nova_compute[185389]: 2026-01-26 16:36:35.350 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:36:36 compute-0 nova_compute[185389]: 2026-01-26 16:36:36.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:36:37 compute-0 podman[238296]: 2026-01-26 16:36:37.215081029 +0000 UTC m=+0.095366477 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=openstack_network_exporter, name=ubi9-minimal, release=1755695350, vendor=Red Hat, Inc., architecture=x86_64, managed_by=edpm_ansible, version=9.6, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, vcs-type=git)
Jan 26 16:36:40 compute-0 podman[238316]: 2026-01-26 16:36:40.18490561 +0000 UTC m=+0.077855991 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260120, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Jan 26 16:36:45 compute-0 podman[238336]: 2026-01-26 16:36:45.216729036 +0000 UTC m=+0.106313380 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 26 16:36:48 compute-0 podman[238360]: 2026-01-26 16:36:48.208134746 +0000 UTC m=+0.095298575 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202)
Jan 26 16:36:50 compute-0 podman[238380]: 2026-01-26 16:36:50.199501451 +0000 UTC m=+0.087414956 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, 
org.label-schema.build-date=20251202, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:36:52 compute-0 podman[238400]: 2026-01-26 16:36:52.204413663 +0000 UTC m=+0.091787988 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., release-0.7.12=, build-date=2024-09-18T21:23:30, config_id=kepler, io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., version=9.4, container_name=kepler, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1214.1726694543, vcs-type=git, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 26 16:36:53 compute-0 podman[238420]: 2026-01-26 16:36:53.26044127 +0000 UTC m=+0.123314382 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
config_id=ovn_controller, container_name=ovn_controller)
Jan 26 16:36:59 compute-0 podman[201244]: time="2026-01-26T16:36:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 16:36:59 compute-0 podman[201244]: @ - - [26/Jan/2026:16:36:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 26 16:36:59 compute-0 podman[201244]: @ - - [26/Jan/2026:16:36:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3862 "" "Go-http-client/1.1"
Jan 26 16:37:00 compute-0 podman[238446]: 2026-01-26 16:37:00.19425925 +0000 UTC m=+0.086442289 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 26 16:37:01 compute-0 openstack_network_exporter[204387]: ERROR   16:37:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 16:37:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:37:01 compute-0 openstack_network_exporter[204387]: ERROR   16:37:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 16:37:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:37:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:37:01.713 106955 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:37:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:37:01.715 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:37:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:37:01.715 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:37:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:37:03.518 106955 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '0e:35:12', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'de:09:3c:73:7d:ab'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:37:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:37:03.521 106955 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 26 16:37:07 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:37:07.525 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1c72c11d-5050-47c3-89e8-912766588fb3, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:37:08 compute-0 podman[238469]: 2026-01-26 16:37:08.208007422 +0000 UTC m=+0.095290244 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, managed_by=edpm_ansible, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, container_name=openstack_network_exporter, vcs-type=git, build-date=2025-08-20T13:12:41, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=openstack_network_exporter)
Jan 26 16:37:11 compute-0 podman[238489]: 2026-01-26 16:37:11.332356459 +0000 UTC m=+0.173165315 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260120, org.label-schema.license=GPLv2, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:37:16 compute-0 podman[238508]: 2026-01-26 16:37:16.269501289 +0000 UTC m=+0.158733135 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 26 16:37:19 compute-0 podman[238531]: 2026-01-26 16:37:19.239091263 +0000 UTC m=+0.124458335 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:37:21 compute-0 podman[238550]: 2026-01-26 16:37:21.18906266 +0000 UTC m=+0.076380882 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Jan 26 16:37:21 compute-0 nova_compute[185389]: 2026-01-26 16:37:21.426 185393 DEBUG oslo_concurrency.lockutils [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Acquiring lock "60ba224f-9c5d-4eb4-b501-66d7339832b9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:37:21 compute-0 nova_compute[185389]: 2026-01-26 16:37:21.427 185393 DEBUG oslo_concurrency.lockutils [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "60ba224f-9c5d-4eb4-b501-66d7339832b9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:37:21 compute-0 nova_compute[185389]: 2026-01-26 16:37:21.568 185393 DEBUG nova.compute.manager [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:37:21 compute-0 nova_compute[185389]: 2026-01-26 16:37:21.755 185393 DEBUG oslo_concurrency.lockutils [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:37:21 compute-0 nova_compute[185389]: 2026-01-26 16:37:21.755 185393 DEBUG oslo_concurrency.lockutils [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:37:21 compute-0 nova_compute[185389]: 2026-01-26 16:37:21.768 185393 DEBUG nova.virt.hardware [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:37:21 compute-0 nova_compute[185389]: 2026-01-26 16:37:21.768 185393 INFO nova.compute.claims [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:37:21 compute-0 nova_compute[185389]: 2026-01-26 16:37:21.985 185393 DEBUG nova.compute.provider_tree [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:37:22 compute-0 nova_compute[185389]: 2026-01-26 16:37:22.004 185393 DEBUG nova.scheduler.client.report [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:37:22 compute-0 nova_compute[185389]: 2026-01-26 16:37:22.073 185393 DEBUG oslo_concurrency.lockutils [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.318s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:37:22 compute-0 nova_compute[185389]: 2026-01-26 16:37:22.075 185393 DEBUG nova.compute.manager [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:37:22 compute-0 nova_compute[185389]: 2026-01-26 16:37:22.203 185393 DEBUG nova.compute.manager [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:37:22 compute-0 nova_compute[185389]: 2026-01-26 16:37:22.204 185393 DEBUG nova.network.neutron [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:37:22 compute-0 nova_compute[185389]: 2026-01-26 16:37:22.236 185393 INFO nova.virt.libvirt.driver [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:37:22 compute-0 nova_compute[185389]: 2026-01-26 16:37:22.528 185393 DEBUG nova.compute.manager [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:37:23 compute-0 nova_compute[185389]: 2026-01-26 16:37:23.034 185393 DEBUG nova.compute.manager [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:37:23 compute-0 nova_compute[185389]: 2026-01-26 16:37:23.036 185393 DEBUG nova.virt.libvirt.driver [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:37:23 compute-0 nova_compute[185389]: 2026-01-26 16:37:23.037 185393 INFO nova.virt.libvirt.driver [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Creating image(s)
Jan 26 16:37:23 compute-0 nova_compute[185389]: 2026-01-26 16:37:23.038 185393 DEBUG oslo_concurrency.lockutils [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Acquiring lock "/var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:37:23 compute-0 nova_compute[185389]: 2026-01-26 16:37:23.038 185393 DEBUG oslo_concurrency.lockutils [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "/var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:37:23 compute-0 nova_compute[185389]: 2026-01-26 16:37:23.039 185393 DEBUG oslo_concurrency.lockutils [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "/var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:37:23 compute-0 nova_compute[185389]: 2026-01-26 16:37:23.039 185393 DEBUG oslo_concurrency.lockutils [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Acquiring lock "7a5e3188ac4de3f0ad8eeb8c9bbd6ccd05a86bb3" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:37:23 compute-0 nova_compute[185389]: 2026-01-26 16:37:23.040 185393 DEBUG oslo_concurrency.lockutils [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "7a5e3188ac4de3f0ad8eeb8c9bbd6ccd05a86bb3" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:37:23 compute-0 nova_compute[185389]: 2026-01-26 16:37:23.053 185393 WARNING oslo_policy.policy [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Jan 26 16:37:23 compute-0 nova_compute[185389]: 2026-01-26 16:37:23.053 185393 WARNING oslo_policy.policy [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Jan 26 16:37:23 compute-0 podman[238567]: 2026-01-26 16:37:23.206907741 +0000 UTC m=+0.088452545 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., config_id=kepler, io.openshift.expose-services=, version=9.4, container_name=kepler, name=ubi9, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, vcs-type=git, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, distribution-scope=public, architecture=x86_64)
Jan 26 16:37:24 compute-0 podman[238585]: 2026-01-26 16:37:24.235662971 +0000 UTC m=+0.116995997 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, 
org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 26 16:37:24 compute-0 nova_compute[185389]: 2026-01-26 16:37:24.347 185393 DEBUG oslo_concurrency.processutils [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7a5e3188ac4de3f0ad8eeb8c9bbd6ccd05a86bb3.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:37:24 compute-0 nova_compute[185389]: 2026-01-26 16:37:24.418 185393 DEBUG oslo_concurrency.processutils [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7a5e3188ac4de3f0ad8eeb8c9bbd6ccd05a86bb3.part --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:37:24 compute-0 nova_compute[185389]: 2026-01-26 16:37:24.419 185393 DEBUG nova.virt.images [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] 718285d9-0264-40f4-9fb3-d2faff180284 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Jan 26 16:37:24 compute-0 nova_compute[185389]: 2026-01-26 16:37:24.420 185393 DEBUG nova.privsep.utils [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Jan 26 16:37:24 compute-0 nova_compute[185389]: 2026-01-26 16:37:24.421 185393 DEBUG oslo_concurrency.processutils [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/7a5e3188ac4de3f0ad8eeb8c9bbd6ccd05a86bb3.part /var/lib/nova/instances/_base/7a5e3188ac4de3f0ad8eeb8c9bbd6ccd05a86bb3.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:37:24 compute-0 nova_compute[185389]: 2026-01-26 16:37:24.435 185393 DEBUG nova.network.neutron [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Successfully created port: 0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:37:24 compute-0 nova_compute[185389]: 2026-01-26 16:37:24.618 185393 DEBUG oslo_concurrency.processutils [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/7a5e3188ac4de3f0ad8eeb8c9bbd6ccd05a86bb3.part /var/lib/nova/instances/_base/7a5e3188ac4de3f0ad8eeb8c9bbd6ccd05a86bb3.converted" returned: 0 in 0.197s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:37:24 compute-0 nova_compute[185389]: 2026-01-26 16:37:24.623 185393 DEBUG oslo_concurrency.processutils [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7a5e3188ac4de3f0ad8eeb8c9bbd6ccd05a86bb3.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:37:24 compute-0 nova_compute[185389]: 2026-01-26 16:37:24.682 185393 DEBUG oslo_concurrency.processutils [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7a5e3188ac4de3f0ad8eeb8c9bbd6ccd05a86bb3.converted --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:37:24 compute-0 nova_compute[185389]: 2026-01-26 16:37:24.683 185393 DEBUG oslo_concurrency.lockutils [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "7a5e3188ac4de3f0ad8eeb8c9bbd6ccd05a86bb3" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.643s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:37:24 compute-0 nova_compute[185389]: 2026-01-26 16:37:24.695 185393 INFO oslo.privsep.daemon [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpa0jr1mv8/privsep.sock']
Jan 26 16:37:25 compute-0 nova_compute[185389]: 2026-01-26 16:37:25.419 185393 INFO oslo.privsep.daemon [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Spawned new privsep daemon via rootwrap
Jan 26 16:37:25 compute-0 nova_compute[185389]: 2026-01-26 16:37:25.272 238630 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 26 16:37:25 compute-0 nova_compute[185389]: 2026-01-26 16:37:25.283 238630 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 26 16:37:25 compute-0 nova_compute[185389]: 2026-01-26 16:37:25.288 238630 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Jan 26 16:37:25 compute-0 nova_compute[185389]: 2026-01-26 16:37:25.288 238630 INFO oslo.privsep.daemon [-] privsep daemon running as pid 238630
Jan 26 16:37:25 compute-0 nova_compute[185389]: 2026-01-26 16:37:25.501 185393 DEBUG oslo_concurrency.processutils [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7a5e3188ac4de3f0ad8eeb8c9bbd6ccd05a86bb3 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:37:25 compute-0 nova_compute[185389]: 2026-01-26 16:37:25.560 185393 DEBUG oslo_concurrency.processutils [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7a5e3188ac4de3f0ad8eeb8c9bbd6ccd05a86bb3 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:37:25 compute-0 nova_compute[185389]: 2026-01-26 16:37:25.562 185393 DEBUG oslo_concurrency.lockutils [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Acquiring lock "7a5e3188ac4de3f0ad8eeb8c9bbd6ccd05a86bb3" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:37:25 compute-0 nova_compute[185389]: 2026-01-26 16:37:25.564 185393 DEBUG oslo_concurrency.lockutils [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "7a5e3188ac4de3f0ad8eeb8c9bbd6ccd05a86bb3" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:37:25 compute-0 nova_compute[185389]: 2026-01-26 16:37:25.583 185393 DEBUG oslo_concurrency.processutils [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7a5e3188ac4de3f0ad8eeb8c9bbd6ccd05a86bb3 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:37:25 compute-0 nova_compute[185389]: 2026-01-26 16:37:25.658 185393 DEBUG oslo_concurrency.processutils [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7a5e3188ac4de3f0ad8eeb8c9bbd6ccd05a86bb3 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:37:25 compute-0 nova_compute[185389]: 2026-01-26 16:37:25.659 185393 DEBUG oslo_concurrency.processutils [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/7a5e3188ac4de3f0ad8eeb8c9bbd6ccd05a86bb3,backing_fmt=raw /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:37:25 compute-0 nova_compute[185389]: 2026-01-26 16:37:25.699 185393 DEBUG oslo_concurrency.processutils [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/7a5e3188ac4de3f0ad8eeb8c9bbd6ccd05a86bb3,backing_fmt=raw /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk 1073741824" returned: 0 in 0.040s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:37:25 compute-0 nova_compute[185389]: 2026-01-26 16:37:25.700 185393 DEBUG oslo_concurrency.lockutils [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "7a5e3188ac4de3f0ad8eeb8c9bbd6ccd05a86bb3" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.136s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:37:25 compute-0 nova_compute[185389]: 2026-01-26 16:37:25.701 185393 DEBUG oslo_concurrency.processutils [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7a5e3188ac4de3f0ad8eeb8c9bbd6ccd05a86bb3 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:37:25 compute-0 nova_compute[185389]: 2026-01-26 16:37:25.754 185393 DEBUG oslo_concurrency.processutils [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7a5e3188ac4de3f0ad8eeb8c9bbd6ccd05a86bb3 --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:37:25 compute-0 nova_compute[185389]: 2026-01-26 16:37:25.755 185393 DEBUG nova.virt.disk.api [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Checking if we can resize image /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 26 16:37:25 compute-0 nova_compute[185389]: 2026-01-26 16:37:25.755 185393 DEBUG oslo_concurrency.processutils [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:37:25 compute-0 nova_compute[185389]: 2026-01-26 16:37:25.810 185393 DEBUG oslo_concurrency.processutils [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:37:25 compute-0 nova_compute[185389]: 2026-01-26 16:37:25.811 185393 DEBUG nova.virt.disk.api [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Cannot resize image /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 26 16:37:25 compute-0 nova_compute[185389]: 2026-01-26 16:37:25.812 185393 DEBUG nova.objects.instance [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lazy-loading 'migration_context' on Instance uuid 60ba224f-9c5d-4eb4-b501-66d7339832b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:37:25 compute-0 nova_compute[185389]: 2026-01-26 16:37:25.836 185393 DEBUG oslo_concurrency.lockutils [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Acquiring lock "/var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:37:25 compute-0 nova_compute[185389]: 2026-01-26 16:37:25.837 185393 DEBUG oslo_concurrency.lockutils [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "/var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:37:25 compute-0 nova_compute[185389]: 2026-01-26 16:37:25.838 185393 DEBUG oslo_concurrency.lockutils [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "/var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:37:25 compute-0 nova_compute[185389]: 2026-01-26 16:37:25.838 185393 DEBUG oslo_concurrency.lockutils [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:37:25 compute-0 nova_compute[185389]: 2026-01-26 16:37:25.839 185393 DEBUG oslo_concurrency.lockutils [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:37:25 compute-0 nova_compute[185389]: 2026-01-26 16:37:25.840 185393 DEBUG oslo_concurrency.processutils [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:37:25 compute-0 nova_compute[185389]: 2026-01-26 16:37:25.881 185393 DEBUG oslo_concurrency.processutils [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G" returned: 0 in 0.041s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:37:25 compute-0 nova_compute[185389]: 2026-01-26 16:37:25.882 185393 DEBUG oslo_concurrency.processutils [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Running cmd (subprocess): mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:37:25 compute-0 nova_compute[185389]: 2026-01-26 16:37:25.944 185393 DEBUG oslo_concurrency.processutils [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CMD "mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:37:25 compute-0 nova_compute[185389]: 2026-01-26 16:37:25.946 185393 DEBUG oslo_concurrency.lockutils [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.106s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:37:25 compute-0 nova_compute[185389]: 2026-01-26 16:37:25.975 185393 DEBUG oslo_concurrency.processutils [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:37:26 compute-0 nova_compute[185389]: 2026-01-26 16:37:26.071 185393 DEBUG oslo_concurrency.processutils [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:37:26 compute-0 nova_compute[185389]: 2026-01-26 16:37:26.073 185393 DEBUG oslo_concurrency.lockutils [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:37:26 compute-0 nova_compute[185389]: 2026-01-26 16:37:26.075 185393 DEBUG oslo_concurrency.lockutils [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:37:26 compute-0 nova_compute[185389]: 2026-01-26 16:37:26.104 185393 DEBUG oslo_concurrency.processutils [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:37:26 compute-0 nova_compute[185389]: 2026-01-26 16:37:26.212 185393 DEBUG oslo_concurrency.processutils [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.108s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:37:26 compute-0 nova_compute[185389]: 2026-01-26 16:37:26.214 185393 DEBUG oslo_concurrency.processutils [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:37:26 compute-0 nova_compute[185389]: 2026-01-26 16:37:26.290 185393 DEBUG oslo_concurrency.processutils [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 1073741824" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:37:26 compute-0 nova_compute[185389]: 2026-01-26 16:37:26.292 185393 DEBUG oslo_concurrency.lockutils [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.217s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:37:26 compute-0 nova_compute[185389]: 2026-01-26 16:37:26.293 185393 DEBUG oslo_concurrency.processutils [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:37:26 compute-0 nova_compute[185389]: 2026-01-26 16:37:26.378 185393 DEBUG oslo_concurrency.processutils [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:37:26 compute-0 nova_compute[185389]: 2026-01-26 16:37:26.380 185393 DEBUG nova.virt.libvirt.driver [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:37:26 compute-0 nova_compute[185389]: 2026-01-26 16:37:26.381 185393 DEBUG nova.virt.libvirt.driver [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Ensure instance console log exists: /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:37:26 compute-0 nova_compute[185389]: 2026-01-26 16:37:26.382 185393 DEBUG oslo_concurrency.lockutils [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:37:26 compute-0 nova_compute[185389]: 2026-01-26 16:37:26.383 185393 DEBUG oslo_concurrency.lockutils [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:37:26 compute-0 nova_compute[185389]: 2026-01-26 16:37:26.385 185393 DEBUG oslo_concurrency.lockutils [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:37:26 compute-0 nova_compute[185389]: 2026-01-26 16:37:26.721 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:37:26 compute-0 nova_compute[185389]: 2026-01-26 16:37:26.949 185393 DEBUG nova.network.neutron [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Successfully updated port: 0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:37:26 compute-0 nova_compute[185389]: 2026-01-26 16:37:26.968 185393 DEBUG oslo_concurrency.lockutils [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Acquiring lock "refresh_cache-60ba224f-9c5d-4eb4-b501-66d7339832b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:37:26 compute-0 nova_compute[185389]: 2026-01-26 16:37:26.969 185393 DEBUG oslo_concurrency.lockutils [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Acquired lock "refresh_cache-60ba224f-9c5d-4eb4-b501-66d7339832b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:37:26 compute-0 nova_compute[185389]: 2026-01-26 16:37:26.970 185393 DEBUG nova.network.neutron [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:37:27 compute-0 nova_compute[185389]: 2026-01-26 16:37:27.180 185393 DEBUG nova.network.neutron [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:37:28 compute-0 nova_compute[185389]: 2026-01-26 16:37:28.221 185393 DEBUG nova.compute.manager [req-e125c227-2abd-4c6d-8c5c-df7c984c65f5 req-8823b099-2ca2-4ae9-bb18-14a3b2e27206 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Received event network-changed-0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:37:28 compute-0 nova_compute[185389]: 2026-01-26 16:37:28.222 185393 DEBUG nova.compute.manager [req-e125c227-2abd-4c6d-8c5c-df7c984c65f5 req-8823b099-2ca2-4ae9-bb18-14a3b2e27206 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Refreshing instance network info cache due to event network-changed-0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:37:28 compute-0 nova_compute[185389]: 2026-01-26 16:37:28.222 185393 DEBUG oslo_concurrency.lockutils [req-e125c227-2abd-4c6d-8c5c-df7c984c65f5 req-8823b099-2ca2-4ae9-bb18-14a3b2e27206 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquiring lock "refresh_cache-60ba224f-9c5d-4eb4-b501-66d7339832b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:37:28 compute-0 nova_compute[185389]: 2026-01-26 16:37:28.614 185393 DEBUG nova.network.neutron [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Updating instance_info_cache with network_info: [{"id": "0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f", "address": "fa:16:3e:b0:51:31", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.57", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f88f3ae-fb", "ovs_interfaceid": "0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:37:28 compute-0 nova_compute[185389]: 2026-01-26 16:37:28.649 185393 DEBUG oslo_concurrency.lockutils [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Releasing lock "refresh_cache-60ba224f-9c5d-4eb4-b501-66d7339832b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:37:28 compute-0 nova_compute[185389]: 2026-01-26 16:37:28.649 185393 DEBUG nova.compute.manager [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Instance network_info: |[{"id": "0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f", "address": "fa:16:3e:b0:51:31", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.57", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f88f3ae-fb", "ovs_interfaceid": "0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:37:28 compute-0 nova_compute[185389]: 2026-01-26 16:37:28.650 185393 DEBUG oslo_concurrency.lockutils [req-e125c227-2abd-4c6d-8c5c-df7c984c65f5 req-8823b099-2ca2-4ae9-bb18-14a3b2e27206 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquired lock "refresh_cache-60ba224f-9c5d-4eb4-b501-66d7339832b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:37:28 compute-0 nova_compute[185389]: 2026-01-26 16:37:28.650 185393 DEBUG nova.network.neutron [req-e125c227-2abd-4c6d-8c5c-df7c984c65f5 req-8823b099-2ca2-4ae9-bb18-14a3b2e27206 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Refreshing network info cache for port 0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:37:28 compute-0 nova_compute[185389]: 2026-01-26 16:37:28.655 185393 DEBUG nova.virt.libvirt.driver [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Start _get_guest_xml network_info=[{"id": "0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f", "address": "fa:16:3e:b0:51:31", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.57", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f88f3ae-fb", "ovs_interfaceid": "0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-01-26T16:35:52Z,direct_url=<?>,disk_format='qcow2',id=718285d9-0264-40f4-9fb3-d2faff180284,min_disk=0,min_ram=0,name='cirros',owner='aa8f1f3bbce34237a208c8e92ca9286f',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-01-26T16:35:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'boot_index': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '718285d9-0264-40f4-9fb3-d2faff180284'}], 'ephemerals': [{'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'device_name': '/dev/vdb', 'disk_bus': 'virtio', 'size': 1, 'encryption_secret_uuid': None, 'encryption_format': None, 'encrypted': False}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:37:28 compute-0 nova_compute[185389]: 2026-01-26 16:37:28.667 185393 WARNING nova.virt.libvirt.driver [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:37:28 compute-0 nova_compute[185389]: 2026-01-26 16:37:28.680 185393 DEBUG nova.virt.libvirt.host [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:37:28 compute-0 nova_compute[185389]: 2026-01-26 16:37:28.681 185393 DEBUG nova.virt.libvirt.host [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:37:28 compute-0 nova_compute[185389]: 2026-01-26 16:37:28.689 185393 DEBUG nova.virt.libvirt.host [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:37:28 compute-0 nova_compute[185389]: 2026-01-26 16:37:28.690 185393 DEBUG nova.virt.libvirt.host [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:37:28 compute-0 nova_compute[185389]: 2026-01-26 16:37:28.691 185393 DEBUG nova.virt.libvirt.driver [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:37:28 compute-0 nova_compute[185389]: 2026-01-26 16:37:28.691 185393 DEBUG nova.virt.hardware [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T16:35:58Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='c2a8df4d-a1d7-42a3-8279-8c7de8a1a662',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-01-26T16:35:52Z,direct_url=<?>,disk_format='qcow2',id=718285d9-0264-40f4-9fb3-d2faff180284,min_disk=0,min_ram=0,name='cirros',owner='aa8f1f3bbce34237a208c8e92ca9286f',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-01-26T16:35:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:37:28 compute-0 nova_compute[185389]: 2026-01-26 16:37:28.692 185393 DEBUG nova.virt.hardware [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:37:28 compute-0 nova_compute[185389]: 2026-01-26 16:37:28.692 185393 DEBUG nova.virt.hardware [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:37:28 compute-0 nova_compute[185389]: 2026-01-26 16:37:28.693 185393 DEBUG nova.virt.hardware [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:37:28 compute-0 nova_compute[185389]: 2026-01-26 16:37:28.693 185393 DEBUG nova.virt.hardware [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:37:28 compute-0 nova_compute[185389]: 2026-01-26 16:37:28.694 185393 DEBUG nova.virt.hardware [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:37:28 compute-0 nova_compute[185389]: 2026-01-26 16:37:28.694 185393 DEBUG nova.virt.hardware [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:37:28 compute-0 nova_compute[185389]: 2026-01-26 16:37:28.695 185393 DEBUG nova.virt.hardware [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:37:28 compute-0 nova_compute[185389]: 2026-01-26 16:37:28.695 185393 DEBUG nova.virt.hardware [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:37:28 compute-0 nova_compute[185389]: 2026-01-26 16:37:28.695 185393 DEBUG nova.virt.hardware [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:37:28 compute-0 nova_compute[185389]: 2026-01-26 16:37:28.696 185393 DEBUG nova.virt.hardware [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:37:28 compute-0 nova_compute[185389]: 2026-01-26 16:37:28.701 185393 DEBUG nova.privsep.utils [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Jan 26 16:37:28 compute-0 nova_compute[185389]: 2026-01-26 16:37:28.703 185393 DEBUG nova.virt.libvirt.vif [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:37:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='718285d9-0264-40f4-9fb3-d2faff180284',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='aa8f1f3bbce34237a208c8e92ca9286f',ramdisk_id='',reservation_id='r-w38kzri4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='718285d9-0264-40f4-9fb3-d2faff180284',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:37:22Z,user_data=None,user_id='3c0ab9326d69400aa6a4a91432885d7f',uuid=60ba224f-9c5d-4eb4-b501-66d7339832b9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f", "address": "fa:16:3e:b0:51:31", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.57", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f88f3ae-fb", "ovs_interfaceid": "0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:37:28 compute-0 nova_compute[185389]: 2026-01-26 16:37:28.703 185393 DEBUG nova.network.os_vif_util [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Converting VIF {"id": "0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f", "address": "fa:16:3e:b0:51:31", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.57", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f88f3ae-fb", "ovs_interfaceid": "0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:37:28 compute-0 nova_compute[185389]: 2026-01-26 16:37:28.704 185393 DEBUG nova.network.os_vif_util [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:51:31,bridge_name='br-int',has_traffic_filtering=True,id=0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f,network=Network(74318d1e-b1d8-47d5-8ac3-218d758610fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f88f3ae-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:37:28 compute-0 nova_compute[185389]: 2026-01-26 16:37:28.706 185393 DEBUG nova.objects.instance [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lazy-loading 'pci_devices' on Instance uuid 60ba224f-9c5d-4eb4-b501-66d7339832b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:37:28 compute-0 nova_compute[185389]: 2026-01-26 16:37:28.730 185393 DEBUG nova.virt.libvirt.driver [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:37:28 compute-0 nova_compute[185389]:   <uuid>60ba224f-9c5d-4eb4-b501-66d7339832b9</uuid>
Jan 26 16:37:28 compute-0 nova_compute[185389]:   <name>instance-00000001</name>
Jan 26 16:37:28 compute-0 nova_compute[185389]:   <memory>524288</memory>
Jan 26 16:37:28 compute-0 nova_compute[185389]:   <vcpu>1</vcpu>
Jan 26 16:37:28 compute-0 nova_compute[185389]:   <metadata>
Jan 26 16:37:28 compute-0 nova_compute[185389]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:37:28 compute-0 nova_compute[185389]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:37:28 compute-0 nova_compute[185389]:       <nova:name>test_0</nova:name>
Jan 26 16:37:28 compute-0 nova_compute[185389]:       <nova:creationTime>2026-01-26 16:37:28</nova:creationTime>
Jan 26 16:37:28 compute-0 nova_compute[185389]:       <nova:flavor name="m1.small">
Jan 26 16:37:28 compute-0 nova_compute[185389]:         <nova:memory>512</nova:memory>
Jan 26 16:37:28 compute-0 nova_compute[185389]:         <nova:disk>1</nova:disk>
Jan 26 16:37:28 compute-0 nova_compute[185389]:         <nova:swap>0</nova:swap>
Jan 26 16:37:28 compute-0 nova_compute[185389]:         <nova:ephemeral>1</nova:ephemeral>
Jan 26 16:37:28 compute-0 nova_compute[185389]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:37:28 compute-0 nova_compute[185389]:       </nova:flavor>
Jan 26 16:37:28 compute-0 nova_compute[185389]:       <nova:owner>
Jan 26 16:37:28 compute-0 nova_compute[185389]:         <nova:user uuid="3c0ab9326d69400aa6a4a91432885d7f">admin</nova:user>
Jan 26 16:37:28 compute-0 nova_compute[185389]:         <nova:project uuid="aa8f1f3bbce34237a208c8e92ca9286f">admin</nova:project>
Jan 26 16:37:28 compute-0 nova_compute[185389]:       </nova:owner>
Jan 26 16:37:28 compute-0 nova_compute[185389]:       <nova:root type="image" uuid="718285d9-0264-40f4-9fb3-d2faff180284"/>
Jan 26 16:37:28 compute-0 nova_compute[185389]:       <nova:ports>
Jan 26 16:37:28 compute-0 nova_compute[185389]:         <nova:port uuid="0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f">
Jan 26 16:37:28 compute-0 nova_compute[185389]:           <nova:ip type="fixed" address="192.168.0.57" ipVersion="4"/>
Jan 26 16:37:28 compute-0 nova_compute[185389]:         </nova:port>
Jan 26 16:37:28 compute-0 nova_compute[185389]:       </nova:ports>
Jan 26 16:37:28 compute-0 nova_compute[185389]:     </nova:instance>
Jan 26 16:37:28 compute-0 nova_compute[185389]:   </metadata>
Jan 26 16:37:28 compute-0 nova_compute[185389]:   <sysinfo type="smbios">
Jan 26 16:37:28 compute-0 nova_compute[185389]:     <system>
Jan 26 16:37:28 compute-0 nova_compute[185389]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:37:28 compute-0 nova_compute[185389]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:37:28 compute-0 nova_compute[185389]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:37:28 compute-0 nova_compute[185389]:       <entry name="serial">60ba224f-9c5d-4eb4-b501-66d7339832b9</entry>
Jan 26 16:37:28 compute-0 nova_compute[185389]:       <entry name="uuid">60ba224f-9c5d-4eb4-b501-66d7339832b9</entry>
Jan 26 16:37:28 compute-0 nova_compute[185389]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:37:28 compute-0 nova_compute[185389]:     </system>
Jan 26 16:37:28 compute-0 nova_compute[185389]:   </sysinfo>
Jan 26 16:37:28 compute-0 nova_compute[185389]:   <os>
Jan 26 16:37:28 compute-0 nova_compute[185389]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:37:28 compute-0 nova_compute[185389]:     <boot dev="hd"/>
Jan 26 16:37:28 compute-0 nova_compute[185389]:     <smbios mode="sysinfo"/>
Jan 26 16:37:28 compute-0 nova_compute[185389]:   </os>
Jan 26 16:37:28 compute-0 nova_compute[185389]:   <features>
Jan 26 16:37:28 compute-0 nova_compute[185389]:     <acpi/>
Jan 26 16:37:28 compute-0 nova_compute[185389]:     <apic/>
Jan 26 16:37:28 compute-0 nova_compute[185389]:     <vmcoreinfo/>
Jan 26 16:37:28 compute-0 nova_compute[185389]:   </features>
Jan 26 16:37:28 compute-0 nova_compute[185389]:   <clock offset="utc">
Jan 26 16:37:28 compute-0 nova_compute[185389]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:37:28 compute-0 nova_compute[185389]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:37:28 compute-0 nova_compute[185389]:     <timer name="hpet" present="no"/>
Jan 26 16:37:28 compute-0 nova_compute[185389]:   </clock>
Jan 26 16:37:28 compute-0 nova_compute[185389]:   <cpu mode="host-model" match="exact">
Jan 26 16:37:28 compute-0 nova_compute[185389]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:37:28 compute-0 nova_compute[185389]:   </cpu>
Jan 26 16:37:28 compute-0 nova_compute[185389]:   <devices>
Jan 26 16:37:28 compute-0 nova_compute[185389]:     <disk type="file" device="disk">
Jan 26 16:37:28 compute-0 nova_compute[185389]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 26 16:37:28 compute-0 nova_compute[185389]:       <source file="/var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk"/>
Jan 26 16:37:28 compute-0 nova_compute[185389]:       <target dev="vda" bus="virtio"/>
Jan 26 16:37:28 compute-0 nova_compute[185389]:     </disk>
Jan 26 16:37:28 compute-0 nova_compute[185389]:     <disk type="file" device="disk">
Jan 26 16:37:28 compute-0 nova_compute[185389]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 26 16:37:28 compute-0 nova_compute[185389]:       <source file="/var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0"/>
Jan 26 16:37:28 compute-0 nova_compute[185389]:       <target dev="vdb" bus="virtio"/>
Jan 26 16:37:28 compute-0 nova_compute[185389]:     </disk>
Jan 26 16:37:28 compute-0 nova_compute[185389]:     <disk type="file" device="cdrom">
Jan 26 16:37:28 compute-0 nova_compute[185389]:       <driver name="qemu" type="raw" cache="none"/>
Jan 26 16:37:28 compute-0 nova_compute[185389]:       <source file="/var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.config"/>
Jan 26 16:37:28 compute-0 nova_compute[185389]:       <target dev="sda" bus="sata"/>
Jan 26 16:37:28 compute-0 nova_compute[185389]:     </disk>
Jan 26 16:37:28 compute-0 nova_compute[185389]:     <interface type="ethernet">
Jan 26 16:37:28 compute-0 nova_compute[185389]:       <mac address="fa:16:3e:b0:51:31"/>
Jan 26 16:37:28 compute-0 nova_compute[185389]:       <model type="virtio"/>
Jan 26 16:37:28 compute-0 nova_compute[185389]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:37:28 compute-0 nova_compute[185389]:       <mtu size="1442"/>
Jan 26 16:37:28 compute-0 nova_compute[185389]:       <target dev="tap0f88f3ae-fb"/>
Jan 26 16:37:28 compute-0 nova_compute[185389]:     </interface>
Jan 26 16:37:28 compute-0 nova_compute[185389]:     <serial type="pty">
Jan 26 16:37:28 compute-0 nova_compute[185389]:       <log file="/var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/console.log" append="off"/>
Jan 26 16:37:28 compute-0 nova_compute[185389]:     </serial>
Jan 26 16:37:28 compute-0 nova_compute[185389]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:37:28 compute-0 nova_compute[185389]:     <video>
Jan 26 16:37:28 compute-0 nova_compute[185389]:       <model type="virtio"/>
Jan 26 16:37:28 compute-0 nova_compute[185389]:     </video>
Jan 26 16:37:28 compute-0 nova_compute[185389]:     <input type="tablet" bus="usb"/>
Jan 26 16:37:28 compute-0 nova_compute[185389]:     <rng model="virtio">
Jan 26 16:37:28 compute-0 nova_compute[185389]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:37:28 compute-0 nova_compute[185389]:     </rng>
Jan 26 16:37:28 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:37:28 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:37:28 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:37:28 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:37:28 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:37:28 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:37:28 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:37:28 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:37:28 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:37:28 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:37:28 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:37:28 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:37:28 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:37:28 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:37:28 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:37:28 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:37:28 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:37:28 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:37:28 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:37:28 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:37:28 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:37:28 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:37:28 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:37:28 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:37:28 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:37:28 compute-0 nova_compute[185389]:     <controller type="usb" index="0"/>
Jan 26 16:37:28 compute-0 nova_compute[185389]:     <memballoon model="virtio">
Jan 26 16:37:28 compute-0 nova_compute[185389]:       <stats period="10"/>
Jan 26 16:37:28 compute-0 nova_compute[185389]:     </memballoon>
Jan 26 16:37:28 compute-0 nova_compute[185389]:   </devices>
Jan 26 16:37:28 compute-0 nova_compute[185389]: </domain>
Jan 26 16:37:28 compute-0 nova_compute[185389]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:37:28 compute-0 nova_compute[185389]: 2026-01-26 16:37:28.732 185393 DEBUG nova.compute.manager [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Preparing to wait for external event network-vif-plugged-0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:37:28 compute-0 nova_compute[185389]: 2026-01-26 16:37:28.732 185393 DEBUG oslo_concurrency.lockutils [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Acquiring lock "60ba224f-9c5d-4eb4-b501-66d7339832b9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:37:28 compute-0 nova_compute[185389]: 2026-01-26 16:37:28.733 185393 DEBUG oslo_concurrency.lockutils [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "60ba224f-9c5d-4eb4-b501-66d7339832b9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:37:28 compute-0 nova_compute[185389]: 2026-01-26 16:37:28.733 185393 DEBUG oslo_concurrency.lockutils [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "60ba224f-9c5d-4eb4-b501-66d7339832b9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:37:28 compute-0 nova_compute[185389]: 2026-01-26 16:37:28.734 185393 DEBUG nova.virt.libvirt.vif [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:37:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='718285d9-0264-40f4-9fb3-d2faff180284',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='aa8f1f3bbce34237a208c8e92ca9286f',ramdisk_id='',reservation_id='r-w38kzri4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='718285d9-0264-40f4-9fb3-d2faff180284',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:37:22Z,user_data=None,user_id='3c0ab9326d69400aa6a4a91432885d7f',uuid=60ba224f-9c5d-4eb4-b501-66d7339832b9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f", "address": "fa:16:3e:b0:51:31", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.57", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f88f3ae-fb", "ovs_interfaceid": "0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:37:28 compute-0 nova_compute[185389]: 2026-01-26 16:37:28.734 185393 DEBUG nova.network.os_vif_util [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Converting VIF {"id": "0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f", "address": "fa:16:3e:b0:51:31", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.57", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f88f3ae-fb", "ovs_interfaceid": "0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:37:28 compute-0 nova_compute[185389]: 2026-01-26 16:37:28.735 185393 DEBUG nova.network.os_vif_util [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:51:31,bridge_name='br-int',has_traffic_filtering=True,id=0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f,network=Network(74318d1e-b1d8-47d5-8ac3-218d758610fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f88f3ae-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:37:28 compute-0 nova_compute[185389]: 2026-01-26 16:37:28.736 185393 DEBUG os_vif [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:51:31,bridge_name='br-int',has_traffic_filtering=True,id=0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f,network=Network(74318d1e-b1d8-47d5-8ac3-218d758610fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f88f3ae-fb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:37:28 compute-0 nova_compute[185389]: 2026-01-26 16:37:28.783 185393 DEBUG ovsdbapp.backend.ovs_idl [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 26 16:37:28 compute-0 nova_compute[185389]: 2026-01-26 16:37:28.784 185393 DEBUG ovsdbapp.backend.ovs_idl [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 26 16:37:28 compute-0 nova_compute[185389]: 2026-01-26 16:37:28.785 185393 DEBUG ovsdbapp.backend.ovs_idl [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 26 16:37:28 compute-0 nova_compute[185389]: 2026-01-26 16:37:28.786 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 26 16:37:28 compute-0 nova_compute[185389]: 2026-01-26 16:37:28.787 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [POLLOUT] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:37:28 compute-0 nova_compute[185389]: 2026-01-26 16:37:28.788 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 26 16:37:28 compute-0 nova_compute[185389]: 2026-01-26 16:37:28.789 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:37:28 compute-0 nova_compute[185389]: 2026-01-26 16:37:28.791 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:37:28 compute-0 nova_compute[185389]: 2026-01-26 16:37:28.795 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:37:28 compute-0 nova_compute[185389]: 2026-01-26 16:37:28.809 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:37:28 compute-0 nova_compute[185389]: 2026-01-26 16:37:28.810 185393 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:37:28 compute-0 nova_compute[185389]: 2026-01-26 16:37:28.810 185393 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:37:28 compute-0 nova_compute[185389]: 2026-01-26 16:37:28.812 185393 INFO oslo.privsep.daemon [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmpvsxzom61/privsep.sock']
Jan 26 16:37:29 compute-0 nova_compute[185389]: 2026-01-26 16:37:29.283 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:37:29 compute-0 nova_compute[185389]: 2026-01-26 16:37:29.537 185393 INFO oslo.privsep.daemon [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Spawned new privsep daemon via rootwrap
Jan 26 16:37:29 compute-0 nova_compute[185389]: 2026-01-26 16:37:29.399 238667 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 26 16:37:29 compute-0 nova_compute[185389]: 2026-01-26 16:37:29.407 238667 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 26 16:37:29 compute-0 nova_compute[185389]: 2026-01-26 16:37:29.411 238667 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none
Jan 26 16:37:29 compute-0 nova_compute[185389]: 2026-01-26 16:37:29.411 238667 INFO oslo.privsep.daemon [-] privsep daemon running as pid 238667
Jan 26 16:37:29 compute-0 podman[201244]: time="2026-01-26T16:37:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 16:37:29 compute-0 podman[201244]: @ - - [26/Jan/2026:16:37:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 26 16:37:29 compute-0 podman[201244]: @ - - [26/Jan/2026:16:37:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3861 "" "Go-http-client/1.1"
Jan 26 16:37:29 compute-0 nova_compute[185389]: 2026-01-26 16:37:29.858 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:37:29 compute-0 nova_compute[185389]: 2026-01-26 16:37:29.859 185393 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0f88f3ae-fb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:37:29 compute-0 nova_compute[185389]: 2026-01-26 16:37:29.860 185393 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0f88f3ae-fb, col_values=(('external_ids', {'iface-id': '0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b0:51:31', 'vm-uuid': '60ba224f-9c5d-4eb4-b501-66d7339832b9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:37:29 compute-0 nova_compute[185389]: 2026-01-26 16:37:29.864 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:37:29 compute-0 NetworkManager[56253]: <info>  [1769445449.8664] manager: (tap0f88f3ae-fb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Jan 26 16:37:29 compute-0 nova_compute[185389]: 2026-01-26 16:37:29.867 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:37:29 compute-0 nova_compute[185389]: 2026-01-26 16:37:29.882 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:37:29 compute-0 nova_compute[185389]: 2026-01-26 16:37:29.884 185393 INFO os_vif [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:51:31,bridge_name='br-int',has_traffic_filtering=True,id=0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f,network=Network(74318d1e-b1d8-47d5-8ac3-218d758610fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f88f3ae-fb')
Jan 26 16:37:29 compute-0 nova_compute[185389]: 2026-01-26 16:37:29.973 185393 DEBUG nova.virt.libvirt.driver [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:37:29 compute-0 nova_compute[185389]: 2026-01-26 16:37:29.974 185393 DEBUG nova.virt.libvirt.driver [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:37:29 compute-0 nova_compute[185389]: 2026-01-26 16:37:29.974 185393 DEBUG nova.virt.libvirt.driver [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:37:29 compute-0 nova_compute[185389]: 2026-01-26 16:37:29.974 185393 DEBUG nova.virt.libvirt.driver [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] No VIF found with MAC fa:16:3e:b0:51:31, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:37:29 compute-0 nova_compute[185389]: 2026-01-26 16:37:29.975 185393 INFO nova.virt.libvirt.driver [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Using config drive
Jan 26 16:37:30 compute-0 nova_compute[185389]: 2026-01-26 16:37:30.884 185393 DEBUG nova.network.neutron [req-e125c227-2abd-4c6d-8c5c-df7c984c65f5 req-8823b099-2ca2-4ae9-bb18-14a3b2e27206 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Updated VIF entry in instance network info cache for port 0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:37:30 compute-0 nova_compute[185389]: 2026-01-26 16:37:30.885 185393 DEBUG nova.network.neutron [req-e125c227-2abd-4c6d-8c5c-df7c984c65f5 req-8823b099-2ca2-4ae9-bb18-14a3b2e27206 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Updating instance_info_cache with network_info: [{"id": "0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f", "address": "fa:16:3e:b0:51:31", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.57", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f88f3ae-fb", "ovs_interfaceid": "0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:37:30 compute-0 nova_compute[185389]: 2026-01-26 16:37:30.917 185393 DEBUG oslo_concurrency.lockutils [req-e125c227-2abd-4c6d-8c5c-df7c984c65f5 req-8823b099-2ca2-4ae9-bb18-14a3b2e27206 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Releasing lock "refresh_cache-60ba224f-9c5d-4eb4-b501-66d7339832b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:37:31 compute-0 nova_compute[185389]: 2026-01-26 16:37:31.132 185393 INFO nova.virt.libvirt.driver [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Creating config drive at /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.config
Jan 26 16:37:31 compute-0 nova_compute[185389]: 2026-01-26 16:37:31.145 185393 DEBUG oslo_concurrency.processutils [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfkm4vh38 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:37:31 compute-0 podman[238673]: 2026-01-26 16:37:31.288640309 +0000 UTC m=+0.161226864 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 26 16:37:31 compute-0 nova_compute[185389]: 2026-01-26 16:37:31.297 185393 DEBUG oslo_concurrency.processutils [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfkm4vh38" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:37:31 compute-0 openstack_network_exporter[204387]: ERROR   16:37:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 16:37:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:37:31 compute-0 openstack_network_exporter[204387]: ERROR   16:37:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 16:37:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:37:31 compute-0 kernel: tun: Universal TUN/TAP device driver, 1.6
Jan 26 16:37:31 compute-0 kernel: tap0f88f3ae-fb: entered promiscuous mode
Jan 26 16:37:31 compute-0 NetworkManager[56253]: <info>  [1769445451.4536] manager: (tap0f88f3ae-fb): new Tun device (/org/freedesktop/NetworkManager/Devices/20)
Jan 26 16:37:31 compute-0 nova_compute[185389]: 2026-01-26 16:37:31.457 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:37:31 compute-0 ovn_controller[97699]: 2026-01-26T16:37:31Z|00027|binding|INFO|Claiming lport 0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f for this chassis.
Jan 26 16:37:31 compute-0 ovn_controller[97699]: 2026-01-26T16:37:31Z|00028|binding|INFO|0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f: Claiming fa:16:3e:b0:51:31 192.168.0.57
Jan 26 16:37:31 compute-0 nova_compute[185389]: 2026-01-26 16:37:31.465 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:37:31 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:37:31.480 106955 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:51:31 192.168.0.57'], port_security=['fa:16:3e:b0:51:31 192.168.0.57'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.57/24', 'neutron:device_id': '60ba224f-9c5d-4eb4-b501-66d7339832b9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74318d1e-b1d8-47d5-8ac3-218d758610fe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aa8f1f3bbce34237a208c8e92ca9286f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c6ae7745-53c4-4846-bf8b-0c9f0303bef3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1197b65b-eda5-4824-97ab-519748b0b6a7, chassis=[<ovs.db.idl.Row object at 0x7faee18cfdf0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7faee18cfdf0>], logical_port=0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:37:31 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:37:31.482 106955 INFO neutron.agent.ovn.metadata.agent [-] Port 0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f in datapath 74318d1e-b1d8-47d5-8ac3-218d758610fe bound to our chassis
Jan 26 16:37:31 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:37:31.484 106955 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 74318d1e-b1d8-47d5-8ac3-218d758610fe
Jan 26 16:37:31 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:37:31.485 106955 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpp3bl94a3/privsep.sock']
Jan 26 16:37:31 compute-0 systemd-udevd[238717]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:37:31 compute-0 NetworkManager[56253]: <info>  [1769445451.5410] device (tap0f88f3ae-fb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:37:31 compute-0 NetworkManager[56253]: <info>  [1769445451.5419] device (tap0f88f3ae-fb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:37:31 compute-0 nova_compute[185389]: 2026-01-26 16:37:31.538 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:37:31 compute-0 ovn_controller[97699]: 2026-01-26T16:37:31Z|00029|binding|INFO|Setting lport 0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f ovn-installed in OVS
Jan 26 16:37:31 compute-0 ovn_controller[97699]: 2026-01-26T16:37:31Z|00030|binding|INFO|Setting lport 0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f up in Southbound
Jan 26 16:37:31 compute-0 nova_compute[185389]: 2026-01-26 16:37:31.549 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:37:31 compute-0 systemd-machined[156679]: New machine qemu-1-instance-00000001.
Jan 26 16:37:31 compute-0 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Jan 26 16:37:32 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:37:32.171 106955 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 26 16:37:32 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:37:32.173 106955 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpp3bl94a3/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 26 16:37:32 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:37:32.040 238734 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 26 16:37:32 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:37:32.044 238734 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 26 16:37:32 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:37:32.046 238734 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Jan 26 16:37:32 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:37:32.047 238734 INFO oslo.privsep.daemon [-] privsep daemon running as pid 238734
Jan 26 16:37:32 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:37:32.176 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[ba4c632b-4db8-4a7f-bb0b-6835c4b10de7]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:37:32 compute-0 nova_compute[185389]: 2026-01-26 16:37:32.355 185393 DEBUG nova.compute.manager [req-afa97cbe-dc99-4f2f-98eb-bde5a7a91840 req-fb9e2edf-47bb-4a21-bfd6-2544b31235ea 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Received event network-vif-plugged-0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:37:32 compute-0 nova_compute[185389]: 2026-01-26 16:37:32.355 185393 DEBUG oslo_concurrency.lockutils [req-afa97cbe-dc99-4f2f-98eb-bde5a7a91840 req-fb9e2edf-47bb-4a21-bfd6-2544b31235ea 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquiring lock "60ba224f-9c5d-4eb4-b501-66d7339832b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:37:32 compute-0 nova_compute[185389]: 2026-01-26 16:37:32.355 185393 DEBUG oslo_concurrency.lockutils [req-afa97cbe-dc99-4f2f-98eb-bde5a7a91840 req-fb9e2edf-47bb-4a21-bfd6-2544b31235ea 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "60ba224f-9c5d-4eb4-b501-66d7339832b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:37:32 compute-0 nova_compute[185389]: 2026-01-26 16:37:32.356 185393 DEBUG oslo_concurrency.lockutils [req-afa97cbe-dc99-4f2f-98eb-bde5a7a91840 req-fb9e2edf-47bb-4a21-bfd6-2544b31235ea 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "60ba224f-9c5d-4eb4-b501-66d7339832b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:37:32 compute-0 nova_compute[185389]: 2026-01-26 16:37:32.356 185393 DEBUG nova.compute.manager [req-afa97cbe-dc99-4f2f-98eb-bde5a7a91840 req-fb9e2edf-47bb-4a21-bfd6-2544b31235ea 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Processing event network-vif-plugged-0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:37:32 compute-0 nova_compute[185389]: 2026-01-26 16:37:32.460 185393 DEBUG nova.virt.driver [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Emitting event <LifecycleEvent: 1769445452.4585142, 60ba224f-9c5d-4eb4-b501-66d7339832b9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:37:32 compute-0 nova_compute[185389]: 2026-01-26 16:37:32.460 185393 INFO nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] VM Started (Lifecycle Event)
Jan 26 16:37:32 compute-0 nova_compute[185389]: 2026-01-26 16:37:32.463 185393 DEBUG nova.compute.manager [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:37:32 compute-0 nova_compute[185389]: 2026-01-26 16:37:32.479 185393 DEBUG nova.virt.libvirt.driver [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:37:32 compute-0 nova_compute[185389]: 2026-01-26 16:37:32.484 185393 INFO nova.virt.libvirt.driver [-] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Instance spawned successfully.
Jan 26 16:37:32 compute-0 nova_compute[185389]: 2026-01-26 16:37:32.484 185393 DEBUG nova.virt.libvirt.driver [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:37:32 compute-0 nova_compute[185389]: 2026-01-26 16:37:32.515 185393 DEBUG nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:37:32 compute-0 nova_compute[185389]: 2026-01-26 16:37:32.519 185393 DEBUG nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:37:32 compute-0 nova_compute[185389]: 2026-01-26 16:37:32.548 185393 INFO nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:37:32 compute-0 nova_compute[185389]: 2026-01-26 16:37:32.549 185393 DEBUG nova.virt.driver [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Emitting event <LifecycleEvent: 1769445452.4586267, 60ba224f-9c5d-4eb4-b501-66d7339832b9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:37:32 compute-0 nova_compute[185389]: 2026-01-26 16:37:32.549 185393 INFO nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] VM Paused (Lifecycle Event)
Jan 26 16:37:32 compute-0 nova_compute[185389]: 2026-01-26 16:37:32.558 185393 DEBUG nova.virt.libvirt.driver [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:37:32 compute-0 nova_compute[185389]: 2026-01-26 16:37:32.558 185393 DEBUG nova.virt.libvirt.driver [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:37:32 compute-0 nova_compute[185389]: 2026-01-26 16:37:32.559 185393 DEBUG nova.virt.libvirt.driver [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:37:32 compute-0 nova_compute[185389]: 2026-01-26 16:37:32.559 185393 DEBUG nova.virt.libvirt.driver [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:37:32 compute-0 nova_compute[185389]: 2026-01-26 16:37:32.559 185393 DEBUG nova.virt.libvirt.driver [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:37:32 compute-0 nova_compute[185389]: 2026-01-26 16:37:32.560 185393 DEBUG nova.virt.libvirt.driver [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:37:32 compute-0 nova_compute[185389]: 2026-01-26 16:37:32.567 185393 DEBUG nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:37:32 compute-0 nova_compute[185389]: 2026-01-26 16:37:32.572 185393 DEBUG nova.virt.driver [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Emitting event <LifecycleEvent: 1769445452.4791052, 60ba224f-9c5d-4eb4-b501-66d7339832b9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:37:32 compute-0 nova_compute[185389]: 2026-01-26 16:37:32.572 185393 INFO nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] VM Resumed (Lifecycle Event)
Jan 26 16:37:32 compute-0 nova_compute[185389]: 2026-01-26 16:37:32.608 185393 DEBUG nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:37:32 compute-0 nova_compute[185389]: 2026-01-26 16:37:32.614 185393 DEBUG nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:37:32 compute-0 nova_compute[185389]: 2026-01-26 16:37:32.624 185393 INFO nova.compute.manager [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Took 9.59 seconds to spawn the instance on the hypervisor.
Jan 26 16:37:32 compute-0 nova_compute[185389]: 2026-01-26 16:37:32.625 185393 DEBUG nova.compute.manager [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:37:32 compute-0 nova_compute[185389]: 2026-01-26 16:37:32.634 185393 INFO nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:37:32 compute-0 systemd[1]: Starting libvirt proxy daemon...
Jan 26 16:37:32 compute-0 nova_compute[185389]: 2026-01-26 16:37:32.697 185393 INFO nova.compute.manager [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Took 10.98 seconds to build instance.
Jan 26 16:37:32 compute-0 systemd[1]: Started libvirt proxy daemon.
Jan 26 16:37:32 compute-0 nova_compute[185389]: 2026-01-26 16:37:32.737 185393 DEBUG oslo_concurrency.lockutils [None req-c5ee3c3a-e91b-4860-a24d-363a28f37b03 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "60ba224f-9c5d-4eb4-b501-66d7339832b9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.311s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:37:32 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:37:32.748 238734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:37:32 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:37:32.749 238734 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:37:32 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:37:32.749 238734 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:37:32 compute-0 nova_compute[185389]: 2026-01-26 16:37:32.822 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:37:32 compute-0 nova_compute[185389]: 2026-01-26 16:37:32.822 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:37:32 compute-0 nova_compute[185389]: 2026-01-26 16:37:32.823 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:37:32 compute-0 nova_compute[185389]: 2026-01-26 16:37:32.823 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:37:32 compute-0 nova_compute[185389]: 2026-01-26 16:37:32.823 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:37:32 compute-0 nova_compute[185389]: 2026-01-26 16:37:32.851 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:37:32 compute-0 nova_compute[185389]: 2026-01-26 16:37:32.851 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:37:32 compute-0 nova_compute[185389]: 2026-01-26 16:37:32.851 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:37:32 compute-0 nova_compute[185389]: 2026-01-26 16:37:32.852 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:37:32 compute-0 nova_compute[185389]: 2026-01-26 16:37:32.961 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:37:33 compute-0 nova_compute[185389]: 2026-01-26 16:37:33.061 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:37:33 compute-0 nova_compute[185389]: 2026-01-26 16:37:33.062 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:37:33 compute-0 nova_compute[185389]: 2026-01-26 16:37:33.125 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:37:33 compute-0 nova_compute[185389]: 2026-01-26 16:37:33.127 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:37:33 compute-0 nova_compute[185389]: 2026-01-26 16:37:33.192 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:37:33 compute-0 nova_compute[185389]: 2026-01-26 16:37:33.193 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:37:33 compute-0 nova_compute[185389]: 2026-01-26 16:37:33.268 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:37:33 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:37:33.341 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[737c7232-2f7c-46ca-b083-c5664d4136ff]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:37:33 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:37:33.343 106955 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap74318d1e-b1 in ovnmeta-74318d1e-b1d8-47d5-8ac3-218d758610fe namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 16:37:33 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:37:33.348 238734 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap74318d1e-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 16:37:33 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:37:33.349 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[3f137e80-43c2-4407-8275-5525a7a2a3c2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:37:33 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:37:33.379 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[d8758437-a900-44a8-b684-975d9d193de1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:37:33 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:37:33.413 107449 DEBUG oslo.privsep.daemon [-] privsep: reply[5fd69c74-def8-4693-87c1-f317e7624fcd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:37:33 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:37:33.438 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[fbf30991-2e22-4669-8716-d28ad7e532ab]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:37:33 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:37:33.441 106955 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpkanlei8v/privsep.sock']
Jan 26 16:37:33 compute-0 nova_compute[185389]: 2026-01-26 16:37:33.663 185393 WARNING nova.virt.libvirt.driver [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:37:33 compute-0 nova_compute[185389]: 2026-01-26 16:37:33.665 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5491MB free_disk=72.44707870483398GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:37:33 compute-0 nova_compute[185389]: 2026-01-26 16:37:33.665 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:37:33 compute-0 nova_compute[185389]: 2026-01-26 16:37:33.665 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:37:34 compute-0 nova_compute[185389]: 2026-01-26 16:37:34.074 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance 60ba224f-9c5d-4eb4-b501-66d7339832b9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:37:34 compute-0 nova_compute[185389]: 2026-01-26 16:37:34.074 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:37:34 compute-0 nova_compute[185389]: 2026-01-26 16:37:34.075 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:37:34 compute-0 nova_compute[185389]: 2026-01-26 16:37:34.137 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Refreshing inventories for resource provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 26 16:37:34 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:37:34.177 106955 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 26 16:37:34 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:37:34.179 106955 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpkanlei8v/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 26 16:37:34 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:37:34.009 238787 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 26 16:37:34 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:37:34.015 238787 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 26 16:37:34 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:37:34.017 238787 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Jan 26 16:37:34 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:37:34.018 238787 INFO oslo.privsep.daemon [-] privsep daemon running as pid 238787
Jan 26 16:37:34 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:37:34.183 238787 DEBUG oslo.privsep.daemon [-] privsep: reply[b1def73b-874f-4283-8dec-ab4b5e00d245]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:37:34 compute-0 nova_compute[185389]: 2026-01-26 16:37:34.212 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Updating ProviderTree inventory for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 26 16:37:34 compute-0 nova_compute[185389]: 2026-01-26 16:37:34.212 185393 DEBUG nova.compute.provider_tree [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Updating inventory in ProviderTree for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 26 16:37:34 compute-0 nova_compute[185389]: 2026-01-26 16:37:34.233 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Refreshing aggregate associations for resource provider b0bb5d31-f35b-4a04-b67d-66acc24fb822, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 26 16:37:34 compute-0 nova_compute[185389]: 2026-01-26 16:37:34.251 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Refreshing trait associations for resource provider b0bb5d31-f35b-4a04-b67d-66acc24fb822, traits: COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SHA,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_ABM,HW_CPU_X86_AMD_SVM,HW_CPU_X86_F16C,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE42,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_BMI,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_MMX,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 26 16:37:34 compute-0 nova_compute[185389]: 2026-01-26 16:37:34.285 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:37:34 compute-0 nova_compute[185389]: 2026-01-26 16:37:34.302 185393 DEBUG nova.compute.provider_tree [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Updating inventory in ProviderTree for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 26 16:37:34 compute-0 nova_compute[185389]: 2026-01-26 16:37:34.594 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Updated inventory for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Jan 26 16:37:34 compute-0 nova_compute[185389]: 2026-01-26 16:37:34.595 185393 DEBUG nova.compute.provider_tree [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Updating resource provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 26 16:37:34 compute-0 nova_compute[185389]: 2026-01-26 16:37:34.595 185393 DEBUG nova.compute.provider_tree [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Updating inventory in ProviderTree for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 26 16:37:34 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:37:34.717 238787 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:37:34 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:37:34.717 238787 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:37:34 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:37:34.717 238787 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:37:34 compute-0 nova_compute[185389]: 2026-01-26 16:37:34.779 185393 DEBUG nova.compute.manager [req-26cc99b6-a033-4046-b917-f87a9609b7bf req-42416327-ee5a-4279-bfa3-15625f20bdd5 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Received event network-vif-plugged-0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:37:34 compute-0 nova_compute[185389]: 2026-01-26 16:37:34.779 185393 DEBUG oslo_concurrency.lockutils [req-26cc99b6-a033-4046-b917-f87a9609b7bf req-42416327-ee5a-4279-bfa3-15625f20bdd5 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquiring lock "60ba224f-9c5d-4eb4-b501-66d7339832b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:37:34 compute-0 nova_compute[185389]: 2026-01-26 16:37:34.780 185393 DEBUG oslo_concurrency.lockutils [req-26cc99b6-a033-4046-b917-f87a9609b7bf req-42416327-ee5a-4279-bfa3-15625f20bdd5 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "60ba224f-9c5d-4eb4-b501-66d7339832b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:37:34 compute-0 nova_compute[185389]: 2026-01-26 16:37:34.781 185393 DEBUG oslo_concurrency.lockutils [req-26cc99b6-a033-4046-b917-f87a9609b7bf req-42416327-ee5a-4279-bfa3-15625f20bdd5 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "60ba224f-9c5d-4eb4-b501-66d7339832b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:37:34 compute-0 nova_compute[185389]: 2026-01-26 16:37:34.782 185393 DEBUG nova.compute.manager [req-26cc99b6-a033-4046-b917-f87a9609b7bf req-42416327-ee5a-4279-bfa3-15625f20bdd5 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] No waiting events found dispatching network-vif-plugged-0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:37:34 compute-0 nova_compute[185389]: 2026-01-26 16:37:34.782 185393 WARNING nova.compute.manager [req-26cc99b6-a033-4046-b917-f87a9609b7bf req-42416327-ee5a-4279-bfa3-15625f20bdd5 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Received unexpected event network-vif-plugged-0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f for instance with vm_state active and task_state None.
Jan 26 16:37:34 compute-0 nova_compute[185389]: 2026-01-26 16:37:34.851 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:37:34 compute-0 nova_compute[185389]: 2026-01-26 16:37:34.852 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.187s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:37:34 compute-0 nova_compute[185389]: 2026-01-26 16:37:34.853 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:37:34 compute-0 nova_compute[185389]: 2026-01-26 16:37:34.855 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 26 16:37:34 compute-0 nova_compute[185389]: 2026-01-26 16:37:34.865 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:37:35 compute-0 nova_compute[185389]: 2026-01-26 16:37:35.051 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 26 16:37:35 compute-0 nova_compute[185389]: 2026-01-26 16:37:35.053 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:37:35 compute-0 nova_compute[185389]: 2026-01-26 16:37:35.054 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 26 16:37:35 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:37:35.483 238787 DEBUG oslo.privsep.daemon [-] privsep: reply[45ba43f9-91ff-4a92-992d-5438ba4b423a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:37:35 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:37:35.528 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[fb385b0b-dc63-4a3a-abb1-73db7ca07ea8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:37:35 compute-0 NetworkManager[56253]: <info>  [1769445455.5308] manager: (tap74318d1e-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/21)
Jan 26 16:37:35 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:37:35.597 238787 DEBUG oslo.privsep.daemon [-] privsep: reply[1853180f-f053-40c5-9f2c-c543c9b097c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:37:35 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:37:35.607 238787 DEBUG oslo.privsep.daemon [-] privsep: reply[56712ef8-4ecc-46c8-8066-0b58527b315a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:37:35 compute-0 systemd-udevd[238800]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:37:35 compute-0 NetworkManager[56253]: <info>  [1769445455.6570] device (tap74318d1e-b0): carrier: link connected
Jan 26 16:37:35 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:37:35.681 238787 DEBUG oslo.privsep.daemon [-] privsep: reply[79d4290b-e68b-457a-9935-2cdf7cedc8ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:37:35 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:37:35.710 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[7c7bacbf-7dee-48bb-9819-8d40f839d043]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap74318d1e-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b2:6c:31'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 410415, 'reachable_time': 25697, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 238817, 'error': None, 'target': 'ovnmeta-74318d1e-b1d8-47d5-8ac3-218d758610fe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:37:35 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:37:35.735 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[d24a3e97-0a45-4adb-8826-dcb25f106f95]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb2:6c31'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 410415, 'tstamp': 410415}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 238818, 'error': None, 'target': 'ovnmeta-74318d1e-b1d8-47d5-8ac3-218d758610fe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:37:35 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:37:35.761 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[69d3694c-8c6d-45bc-812c-72665a0222e4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap74318d1e-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b2:6c:31'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 410415, 'reachable_time': 25697, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 238819, 'error': None, 'target': 'ovnmeta-74318d1e-b1d8-47d5-8ac3-218d758610fe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:37:35 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:37:35.804 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[020bf38c-27d8-40dc-b8e6-bb0b0eb15d4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:37:35 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:37:35.905 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[253bd5ff-ed57-4d06-bfc9-9c5564145879]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:37:35 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:37:35.908 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap74318d1e-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:37:35 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:37:35.909 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:37:35 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:37:35.910 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap74318d1e-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:37:35 compute-0 kernel: tap74318d1e-b0: entered promiscuous mode
Jan 26 16:37:35 compute-0 nova_compute[185389]: 2026-01-26 16:37:35.915 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:37:35 compute-0 NetworkManager[56253]: <info>  [1769445455.9168] manager: (tap74318d1e-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/22)
Jan 26 16:37:35 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:37:35.922 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap74318d1e-b0, col_values=(('external_ids', {'iface-id': '6045fbea-609e-4588-93b4-ca6dda4224d1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:37:35 compute-0 ovn_controller[97699]: 2026-01-26T16:37:35Z|00031|binding|INFO|Releasing lport 6045fbea-609e-4588-93b4-ca6dda4224d1 from this chassis (sb_readonly=0)
Jan 26 16:37:35 compute-0 nova_compute[185389]: 2026-01-26 16:37:35.929 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:37:35 compute-0 nova_compute[185389]: 2026-01-26 16:37:35.951 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:37:35 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:37:35.951 106955 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/74318d1e-b1d8-47d5-8ac3-218d758610fe.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/74318d1e-b1d8-47d5-8ac3-218d758610fe.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 16:37:35 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:37:35.954 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[eea2fc99-40a0-4acc-9166-42f4a8d6d90b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:37:35 compute-0 nova_compute[185389]: 2026-01-26 16:37:35.956 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:37:35 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:37:35.958 106955 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 16:37:35 compute-0 ovn_metadata_agent[106950]: global
Jan 26 16:37:35 compute-0 ovn_metadata_agent[106950]:     log         /dev/log local0 debug
Jan 26 16:37:35 compute-0 ovn_metadata_agent[106950]:     log-tag     haproxy-metadata-proxy-74318d1e-b1d8-47d5-8ac3-218d758610fe
Jan 26 16:37:35 compute-0 ovn_metadata_agent[106950]:     user        root
Jan 26 16:37:35 compute-0 ovn_metadata_agent[106950]:     group       root
Jan 26 16:37:35 compute-0 ovn_metadata_agent[106950]:     maxconn     1024
Jan 26 16:37:35 compute-0 ovn_metadata_agent[106950]:     pidfile     /var/lib/neutron/external/pids/74318d1e-b1d8-47d5-8ac3-218d758610fe.pid.haproxy
Jan 26 16:37:35 compute-0 ovn_metadata_agent[106950]:     daemon
Jan 26 16:37:35 compute-0 ovn_metadata_agent[106950]: 
Jan 26 16:37:35 compute-0 ovn_metadata_agent[106950]: defaults
Jan 26 16:37:35 compute-0 ovn_metadata_agent[106950]:     log global
Jan 26 16:37:35 compute-0 ovn_metadata_agent[106950]:     mode http
Jan 26 16:37:35 compute-0 ovn_metadata_agent[106950]:     option httplog
Jan 26 16:37:35 compute-0 ovn_metadata_agent[106950]:     option dontlognull
Jan 26 16:37:35 compute-0 ovn_metadata_agent[106950]:     option http-server-close
Jan 26 16:37:35 compute-0 ovn_metadata_agent[106950]:     option forwardfor
Jan 26 16:37:35 compute-0 ovn_metadata_agent[106950]:     retries                 3
Jan 26 16:37:35 compute-0 ovn_metadata_agent[106950]:     timeout http-request    30s
Jan 26 16:37:35 compute-0 ovn_metadata_agent[106950]:     timeout connect         30s
Jan 26 16:37:35 compute-0 ovn_metadata_agent[106950]:     timeout client          32s
Jan 26 16:37:35 compute-0 ovn_metadata_agent[106950]:     timeout server          32s
Jan 26 16:37:35 compute-0 ovn_metadata_agent[106950]:     timeout http-keep-alive 30s
Jan 26 16:37:35 compute-0 ovn_metadata_agent[106950]: 
Jan 26 16:37:35 compute-0 ovn_metadata_agent[106950]: 
Jan 26 16:37:35 compute-0 ovn_metadata_agent[106950]: listen listener
Jan 26 16:37:35 compute-0 ovn_metadata_agent[106950]:     bind 169.254.169.254:80
Jan 26 16:37:35 compute-0 ovn_metadata_agent[106950]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 16:37:35 compute-0 ovn_metadata_agent[106950]:     http-request add-header X-OVN-Network-ID 74318d1e-b1d8-47d5-8ac3-218d758610fe
Jan 26 16:37:35 compute-0 ovn_metadata_agent[106950]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 16:37:35 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:37:35.965 106955 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-74318d1e-b1d8-47d5-8ac3-218d758610fe', 'env', 'PROCESS_TAG=haproxy-74318d1e-b1d8-47d5-8ac3-218d758610fe', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/74318d1e-b1d8-47d5-8ac3-218d758610fe.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 16:37:36 compute-0 nova_compute[185389]: 2026-01-26 16:37:36.112 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:37:36 compute-0 nova_compute[185389]: 2026-01-26 16:37:36.113 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:37:36 compute-0 nova_compute[185389]: 2026-01-26 16:37:36.139 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:37:36 compute-0 nova_compute[185389]: 2026-01-26 16:37:36.140 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:37:36 compute-0 nova_compute[185389]: 2026-01-26 16:37:36.141 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 16:37:36 compute-0 nova_compute[185389]: 2026-01-26 16:37:36.459 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "refresh_cache-60ba224f-9c5d-4eb4-b501-66d7339832b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:37:36 compute-0 nova_compute[185389]: 2026-01-26 16:37:36.460 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquired lock "refresh_cache-60ba224f-9c5d-4eb4-b501-66d7339832b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:37:36 compute-0 nova_compute[185389]: 2026-01-26 16:37:36.461 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 26 16:37:36 compute-0 nova_compute[185389]: 2026-01-26 16:37:36.461 185393 DEBUG nova.objects.instance [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 60ba224f-9c5d-4eb4-b501-66d7339832b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:37:36 compute-0 podman[238849]: 2026-01-26 16:37:36.470064024 +0000 UTC m=+0.069722214 container create 808f0f01465cd36db48d7b3fc8eaba9d4bc961157ab02085d0b30420ae3887c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-74318d1e-b1d8-47d5-8ac3-218d758610fe, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 26 16:37:36 compute-0 systemd[1]: Started libpod-conmon-808f0f01465cd36db48d7b3fc8eaba9d4bc961157ab02085d0b30420ae3887c2.scope.
Jan 26 16:37:36 compute-0 podman[238849]: 2026-01-26 16:37:36.433619883 +0000 UTC m=+0.033278103 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 16:37:36 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:37:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67375470e25eafe029c314f624bc375894da7136f274d4d3e7bfb738006d44cf/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 16:37:36 compute-0 podman[238849]: 2026-01-26 16:37:36.585097877 +0000 UTC m=+0.184756137 container init 808f0f01465cd36db48d7b3fc8eaba9d4bc961157ab02085d0b30420ae3887c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-74318d1e-b1d8-47d5-8ac3-218d758610fe, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:37:36 compute-0 podman[238849]: 2026-01-26 16:37:36.6021714 +0000 UTC m=+0.201829630 container start 808f0f01465cd36db48d7b3fc8eaba9d4bc961157ab02085d0b30420ae3887c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-74318d1e-b1d8-47d5-8ac3-218d758610fe, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:37:36 compute-0 neutron-haproxy-ovnmeta-74318d1e-b1d8-47d5-8ac3-218d758610fe[238861]: [NOTICE]   (238869) : New worker (238871) forked
Jan 26 16:37:36 compute-0 neutron-haproxy-ovnmeta-74318d1e-b1d8-47d5-8ac3-218d758610fe[238861]: [NOTICE]   (238869) : Loading success.
Jan 26 16:37:39 compute-0 podman[238880]: 2026-01-26 16:37:39.269384056 +0000 UTC m=+0.139110811 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, vendor=Red Hat, Inc., version=9.6, io.openshift.tags=minimal rhel9, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': 
['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41)
Jan 26 16:37:39 compute-0 nova_compute[185389]: 2026-01-26 16:37:39.286 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:37:39 compute-0 nova_compute[185389]: 2026-01-26 16:37:39.462 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Updating instance_info_cache with network_info: [{"id": "0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f", "address": "fa:16:3e:b0:51:31", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.57", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f88f3ae-fb", "ovs_interfaceid": "0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:37:39 compute-0 nova_compute[185389]: 2026-01-26 16:37:39.490 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Releasing lock "refresh_cache-60ba224f-9c5d-4eb4-b501-66d7339832b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:37:39 compute-0 nova_compute[185389]: 2026-01-26 16:37:39.491 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 26 16:37:39 compute-0 nova_compute[185389]: 2026-01-26 16:37:39.492 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:37:39 compute-0 nova_compute[185389]: 2026-01-26 16:37:39.493 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:37:39 compute-0 nova_compute[185389]: 2026-01-26 16:37:39.496 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:37:39 compute-0 nova_compute[185389]: 2026-01-26 16:37:39.497 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:37:39 compute-0 nova_compute[185389]: 2026-01-26 16:37:39.520 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Triggering sync for uuid 60ba224f-9c5d-4eb4-b501-66d7339832b9 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 26 16:37:39 compute-0 nova_compute[185389]: 2026-01-26 16:37:39.521 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "60ba224f-9c5d-4eb4-b501-66d7339832b9" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:37:39 compute-0 nova_compute[185389]: 2026-01-26 16:37:39.522 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "60ba224f-9c5d-4eb4-b501-66d7339832b9" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:37:39 compute-0 nova_compute[185389]: 2026-01-26 16:37:39.569 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "60ba224f-9c5d-4eb4-b501-66d7339832b9" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.048s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:37:39 compute-0 nova_compute[185389]: 2026-01-26 16:37:39.869 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:37:42 compute-0 podman[238903]: 2026-01-26 16:37:42.221567677 +0000 UTC m=+0.106013722 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20260120, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, tcib_managed=true)
Jan 26 16:37:44 compute-0 nova_compute[185389]: 2026-01-26 16:37:44.290 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:37:44 compute-0 nova_compute[185389]: 2026-01-26 16:37:44.871 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:37:47 compute-0 podman[238923]: 2026-01-26 16:37:47.23206662 +0000 UTC m=+0.096582520 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 26 16:37:48 compute-0 ovn_controller[97699]: 2026-01-26T16:37:48Z|00032|binding|INFO|Releasing lport 6045fbea-609e-4588-93b4-ca6dda4224d1 from this chassis (sb_readonly=0)
Jan 26 16:37:48 compute-0 nova_compute[185389]: 2026-01-26 16:37:48.378 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:37:48 compute-0 NetworkManager[56253]: <info>  [1769445468.3801] manager: (patch-br-int-to-provnet-10704259-5999-4b8c-a177-c158eb08b0dd): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/23)
Jan 26 16:37:48 compute-0 NetworkManager[56253]: <info>  [1769445468.3826] device (patch-br-int-to-provnet-10704259-5999-4b8c-a177-c158eb08b0dd)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 26 16:37:48 compute-0 NetworkManager[56253]: <warn>  [1769445468.3828] device (patch-br-int-to-provnet-10704259-5999-4b8c-a177-c158eb08b0dd)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 26 16:37:48 compute-0 NetworkManager[56253]: <info>  [1769445468.3874] manager: (patch-provnet-10704259-5999-4b8c-a177-c158eb08b0dd-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/24)
Jan 26 16:37:48 compute-0 NetworkManager[56253]: <info>  [1769445468.3898] device (patch-provnet-10704259-5999-4b8c-a177-c158eb08b0dd-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 26 16:37:48 compute-0 NetworkManager[56253]: <warn>  [1769445468.3899] device (patch-provnet-10704259-5999-4b8c-a177-c158eb08b0dd-to-br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 26 16:37:48 compute-0 NetworkManager[56253]: <info>  [1769445468.3949] manager: (patch-provnet-10704259-5999-4b8c-a177-c158eb08b0dd-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/25)
Jan 26 16:37:48 compute-0 NetworkManager[56253]: <info>  [1769445468.3978] manager: (patch-br-int-to-provnet-10704259-5999-4b8c-a177-c158eb08b0dd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/26)
Jan 26 16:37:48 compute-0 NetworkManager[56253]: <info>  [1769445468.4001] device (patch-br-int-to-provnet-10704259-5999-4b8c-a177-c158eb08b0dd)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 26 16:37:48 compute-0 NetworkManager[56253]: <info>  [1769445468.4022] device (patch-provnet-10704259-5999-4b8c-a177-c158eb08b0dd-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 26 16:37:48 compute-0 ovn_controller[97699]: 2026-01-26T16:37:48Z|00033|binding|INFO|Releasing lport 6045fbea-609e-4588-93b4-ca6dda4224d1 from this chassis (sb_readonly=0)
Jan 26 16:37:48 compute-0 nova_compute[185389]: 2026-01-26 16:37:48.407 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:37:48 compute-0 nova_compute[185389]: 2026-01-26 16:37:48.419 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:37:49 compute-0 nova_compute[185389]: 2026-01-26 16:37:49.013 185393 DEBUG nova.compute.manager [req-484e7737-f5f2-4d29-9e8f-fde43247bacf req-52728f50-450e-4bb9-8d79-b3934a159f79 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Received event network-changed-0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:37:49 compute-0 nova_compute[185389]: 2026-01-26 16:37:49.014 185393 DEBUG nova.compute.manager [req-484e7737-f5f2-4d29-9e8f-fde43247bacf req-52728f50-450e-4bb9-8d79-b3934a159f79 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Refreshing instance network info cache due to event network-changed-0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:37:49 compute-0 nova_compute[185389]: 2026-01-26 16:37:49.015 185393 DEBUG oslo_concurrency.lockutils [req-484e7737-f5f2-4d29-9e8f-fde43247bacf req-52728f50-450e-4bb9-8d79-b3934a159f79 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquiring lock "refresh_cache-60ba224f-9c5d-4eb4-b501-66d7339832b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:37:49 compute-0 nova_compute[185389]: 2026-01-26 16:37:49.015 185393 DEBUG oslo_concurrency.lockutils [req-484e7737-f5f2-4d29-9e8f-fde43247bacf req-52728f50-450e-4bb9-8d79-b3934a159f79 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquired lock "refresh_cache-60ba224f-9c5d-4eb4-b501-66d7339832b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:37:49 compute-0 nova_compute[185389]: 2026-01-26 16:37:49.015 185393 DEBUG nova.network.neutron [req-484e7737-f5f2-4d29-9e8f-fde43247bacf req-52728f50-450e-4bb9-8d79-b3934a159f79 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Refreshing network info cache for port 0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:37:49 compute-0 nova_compute[185389]: 2026-01-26 16:37:49.293 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:37:49 compute-0 nova_compute[185389]: 2026-01-26 16:37:49.874 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:37:50 compute-0 podman[238949]: 2026-01-26 16:37:50.291144757 +0000 UTC m=+0.153921211 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 26 16:37:52 compute-0 podman[238966]: 2026-01-26 16:37:52.214492636 +0000 UTC m=+0.101643492 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:37:54 compute-0 podman[238985]: 2026-01-26 16:37:54.235401171 +0000 UTC m=+0.096238651 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, config_id=kepler, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, distribution-scope=public, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, managed_by=edpm_ansible, release=1214.1726694543, name=ubi9, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc.)
Jan 26 16:37:54 compute-0 nova_compute[185389]: 2026-01-26 16:37:54.295 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:37:54 compute-0 podman[239003]: 2026-01-26 16:37:54.455998051 +0000 UTC m=+0.168073904 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 26 16:37:54 compute-0 nova_compute[185389]: 2026-01-26 16:37:54.878 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:37:55 compute-0 nova_compute[185389]: 2026-01-26 16:37:55.411 185393 DEBUG nova.network.neutron [req-484e7737-f5f2-4d29-9e8f-fde43247bacf req-52728f50-450e-4bb9-8d79-b3934a159f79 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Updated VIF entry in instance network info cache for port 0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:37:55 compute-0 nova_compute[185389]: 2026-01-26 16:37:55.412 185393 DEBUG nova.network.neutron [req-484e7737-f5f2-4d29-9e8f-fde43247bacf req-52728f50-450e-4bb9-8d79-b3934a159f79 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Updating instance_info_cache with network_info: [{"id": "0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f", "address": "fa:16:3e:b0:51:31", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.57", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f88f3ae-fb", "ovs_interfaceid": "0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:37:55 compute-0 nova_compute[185389]: 2026-01-26 16:37:55.438 185393 DEBUG oslo_concurrency.lockutils [req-484e7737-f5f2-4d29-9e8f-fde43247bacf req-52728f50-450e-4bb9-8d79-b3934a159f79 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Releasing lock "refresh_cache-60ba224f-9c5d-4eb4-b501-66d7339832b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:37:59 compute-0 nova_compute[185389]: 2026-01-26 16:37:59.301 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:37:59 compute-0 podman[201244]: time="2026-01-26T16:37:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 16:37:59 compute-0 podman[201244]: @ - - [26/Jan/2026:16:37:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 16:37:59 compute-0 podman[201244]: @ - - [26/Jan/2026:16:37:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4344 "" "Go-http-client/1.1"
Jan 26 16:37:59 compute-0 nova_compute[185389]: 2026-01-26 16:37:59.881 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:38:01 compute-0 openstack_network_exporter[204387]: ERROR   16:38:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 16:38:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:38:01 compute-0 openstack_network_exporter[204387]: ERROR   16:38:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 16:38:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:38:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:38:01.716 106955 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:38:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:38:01.720 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:38:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:38:01.722 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:38:02 compute-0 podman[239028]: 2026-01-26 16:38:02.193207221 +0000 UTC m=+0.072956785 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 26 16:38:04 compute-0 nova_compute[185389]: 2026-01-26 16:38:04.307 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:38:04 compute-0 nova_compute[185389]: 2026-01-26 16:38:04.885 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:38:06 compute-0 ovn_controller[97699]: 2026-01-26T16:38:06Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b0:51:31 192.168.0.57
Jan 26 16:38:06 compute-0 ovn_controller[97699]: 2026-01-26T16:38:06Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b0:51:31 192.168.0.57
Jan 26 16:38:09 compute-0 nova_compute[185389]: 2026-01-26 16:38:09.310 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:38:09 compute-0 nova_compute[185389]: 2026-01-26 16:38:09.888 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:38:10 compute-0 podman[239062]: 2026-01-26 16:38:10.256986321 +0000 UTC m=+0.138115839 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, version=9.6, io.openshift.tags=minimal rhel9, release=1755695350, container_name=openstack_network_exporter, 
architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, managed_by=edpm_ansible, distribution-scope=public)
Jan 26 16:38:13 compute-0 podman[239082]: 2026-01-26 16:38:13.227556995 +0000 UTC m=+0.119850273 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260120, tcib_managed=true, container_name=ceilometer_agent_compute)
Jan 26 16:38:14 compute-0 nova_compute[185389]: 2026-01-26 16:38:14.314 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:38:14 compute-0 nova_compute[185389]: 2026-01-26 16:38:14.893 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:38:18 compute-0 podman[239103]: 2026-01-26 16:38:18.192481419 +0000 UTC m=+0.075043947 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 26 16:38:18 compute-0 ovn_controller[97699]: 2026-01-26T16:38:18Z|00034|memory_trim|INFO|Detected inactivity (last active 30018 ms ago): trimming memory
Jan 26 16:38:19 compute-0 nova_compute[185389]: 2026-01-26 16:38:19.317 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:38:19 compute-0 nova_compute[185389]: 2026-01-26 16:38:19.897 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:38:21 compute-0 podman[239126]: 2026-01-26 16:38:21.610630876 +0000 UTC m=+0.482811790 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:38:23 compute-0 podman[239145]: 2026-01-26 16:38:23.247577028 +0000 UTC m=+0.125587917 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 26 16:38:24 compute-0 nova_compute[185389]: 2026-01-26 16:38:24.318 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:38:24 compute-0 nova_compute[185389]: 2026-01-26 16:38:24.901 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:38:25 compute-0 podman[239165]: 2026-01-26 16:38:25.257490542 +0000 UTC m=+0.114882368 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=kepler, name=ubi9, version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., container_name=kepler, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, maintainer=Red Hat, Inc.)
Jan 26 16:38:25 compute-0 podman[239164]: 2026-01-26 16:38:25.283656781 +0000 UTC m=+0.163774754 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, 
org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 26 16:38:29 compute-0 nova_compute[185389]: 2026-01-26 16:38:29.323 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:38:29 compute-0 podman[201244]: time="2026-01-26T16:38:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 16:38:29 compute-0 podman[201244]: @ - - [26/Jan/2026:16:38:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 16:38:29 compute-0 podman[201244]: @ - - [26/Jan/2026:16:38:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4338 "" "Go-http-client/1.1"
Jan 26 16:38:29 compute-0 nova_compute[185389]: 2026-01-26 16:38:29.905 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:31.331 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 26 16:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:31.332 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 26 16:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:31.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:31.334 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f04ce8af020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:31.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:31.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:31.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:31.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:31.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:31.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:31.337 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:31.337 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:31.337 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:31.338 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:31.338 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:31.339 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:31.339 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8adb20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:31.340 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:31.340 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:31.341 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:31.342 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 60ba224f-9c5d-4eb4-b501-66d7339832b9 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 26 16:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:31.342 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:31.350 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:31.350 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:31.351 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:31.352 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:31.352 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:31.352 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:31.353 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:31.353 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:38:31 compute-0 openstack_network_exporter[204387]: ERROR   16:38:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 16:38:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:38:31 compute-0 openstack_network_exporter[204387]: ERROR   16:38:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 16:38:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:31.759 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/60ba224f-9c5d-4eb4-b501-66d7339832b9 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}f609241ecdf9402bd0546eda97196742cf90b225f1ce4eb867c55aad4d129116" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.553 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1849 Content-Type: application/json Date: Mon, 26 Jan 2026 16:38:31 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-2f9c43da-89b4-4384-9fc4-8b493363ef8a x-openstack-request-id: req-2f9c43da-89b4-4384-9fc4-8b493363ef8a _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.554 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "60ba224f-9c5d-4eb4-b501-66d7339832b9", "name": "test_0", "status": "ACTIVE", "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "user_id": "3c0ab9326d69400aa6a4a91432885d7f", "metadata": {}, "hostId": "5b2ad2004cb5a5985538ff82d4dda707a9aa9c0c35c745039e18e89b", "image": {"id": "718285d9-0264-40f4-9fb3-d2faff180284", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/718285d9-0264-40f4-9fb3-d2faff180284"}]}, "flavor": {"id": "c2a8df4d-a1d7-42a3-8279-8c7de8a1a662", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/c2a8df4d-a1d7-42a3-8279-8c7de8a1a662"}]}, "created": "2026-01-26T16:37:18Z", "updated": "2026-01-26T16:37:32Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.57", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:b0:51:31"}, {"version": 4, "addr": "192.168.122.234", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:b0:51:31"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/60ba224f-9c5d-4eb4-b501-66d7339832b9"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/60ba224f-9c5d-4eb4-b501-66d7339832b9"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2026-01-26T16:37:32.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000001", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.554 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/60ba224f-9c5d-4eb4-b501-66d7339832b9 used request id req-2f9c43da-89b4-4384-9fc4-8b493363ef8a request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.557 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '60ba224f-9c5d-4eb4-b501-66d7339832b9', 'name': 'test_0', 'flavor': {'id': 'c2a8df4d-a1d7-42a3-8279-8c7de8a1a662', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '718285d9-0264-40f4-9fb3-d2faff180284'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'aa8f1f3bbce34237a208c8e92ca9286f', 'user_id': '3c0ab9326d69400aa6a4a91432885d7f', 'hostId': '5b2ad2004cb5a5985538ff82d4dda707a9aa9c0c35c745039e18e89b', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.557 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.557 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.557 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.558 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.561 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-26T16:38:32.557814) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.674 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.bytes volume: 41697280 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.676 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.676 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.678 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.678 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f04ce8af080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.678 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.679 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.679 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.679 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.679 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.latency volume: 864322361 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.680 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.latency volume: 10769042 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.681 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.683 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-26T16:38:32.679443) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.683 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.684 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f04ce8af0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.684 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.684 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.684 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.685 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.685 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.requests volume: 220 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.685 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-26T16:38:32.685051) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.686 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.687 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.688 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.688 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f04ce8af470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.688 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.689 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.689 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.690 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.691 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-26T16:38:32.689912) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.700 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 60ba224f-9c5d-4eb4-b501-66d7339832b9 / tap0f88f3ae-fb inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.700 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.701 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.701 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f04ce8ac260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.701 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.701 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.701 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.701 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.701 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.702 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2026-01-26T16:38:32.701591) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.702 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: test_0>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: test_0>]
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.703 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f04cfa81c70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.703 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.703 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.703 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.703 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.703 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-26T16:38:32.703390) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.751 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/cpu volume: 33610000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.752 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.752 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f04ce8ac170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.752 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.752 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.752 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.753 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.753 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.packets volume: 15 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.753 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.753 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f04ce8af140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.754 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.754 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.754 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.754 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-26T16:38:32.753066) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.754 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.755 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.755 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f04ce8adb50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.755 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.755 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.756 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.756 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.755 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-26T16:38:32.754700) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.756 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.756 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.756 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f04ce8ad9a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.757 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.757 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.757 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.757 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.757 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.bytes volume: 1992 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.758 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.758 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f04ce8af1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.758 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.758 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.758 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.758 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.758 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-26T16:38:32.756094) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.759 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.759 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f04ce8ad8e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.759 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.760 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.759 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-26T16:38:32.757370) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.760 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.760 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.760 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.packets volume: 18 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.760 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.760 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f04ce8ada60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.761 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.761 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.761 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.761 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.761 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.762 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.762 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f04ce8adaf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.762 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.762 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8adb20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.762 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8adb20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.762 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-26T16:38:32.758901) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.762 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.763 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.763 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-26T16:38:32.760241) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.763 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: test_0>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: test_0>]
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.763 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f04ce8ad880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.763 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.764 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.764 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.764 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-26T16:38:32.761448) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.764 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.764 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.764 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.764 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f04ce8af3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.765 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.765 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.765 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.765 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.765 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/memory.usage volume: 49.5625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.765 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2026-01-26T16:38:32.762801) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.766 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.766 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f04ce8af6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.766 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.766 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.766 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.766 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.766 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.767 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.767 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f04ce8af410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.767 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.767 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.767 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.767 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.768 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.bytes volume: 1884 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.768 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.768 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f04ce8ac470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.768 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.768 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.769 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.769 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.769 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-26T16:38:32.764289) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.769 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.769 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.770 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f04d17142f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.770 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.770 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.770 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.770 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.770 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-26T16:38:32.765437) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.771 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-26T16:38:32.766711) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.771 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-26T16:38:32.767897) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.772 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-26T16:38:32.769405) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.773 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-26T16:38:32.770752) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.794 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.795 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.795 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.796 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.796 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f04ce8afe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.796 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.796 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.796 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.797 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.797 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.797 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.798 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-26T16:38:32.797102) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.798 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.798 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.799 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f04ce8aede0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.799 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.799 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.799 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.799 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.799 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.799 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-26T16:38:32.799498) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.800 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.800 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.800 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.801 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f04ce8aef00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.801 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.801 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.801 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.801 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.801 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-26T16:38:32.801448) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.801 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.latency volume: 331117122 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.802 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.latency volume: 97972603 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.802 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.latency volume: 58692222 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.802 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.803 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f04ce8ac710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.803 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.803 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.803 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.803 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.803 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.803 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-26T16:38:32.803460) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.804 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.805 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f04ce8aef60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.805 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.805 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.805 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.805 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.805 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.806 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.806 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.807 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.807 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f04ce8aefc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.807 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-26T16:38:32.805692) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.807 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.807 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.807 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.807 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.808 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.808 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-26T16:38:32.807798) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.808 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.808 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.808 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.809 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.809 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.809 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.809 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.809 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.809 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.809 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.809 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.809 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.809 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.810 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.810 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.810 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.810 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.810 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.810 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.810 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.810 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.810 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.810 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.810 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.810 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.810 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.810 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.810 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:38:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:38:32.810 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:38:33 compute-0 podman[239212]: 2026-01-26 16:38:33.229245444 +0000 UTC m=+0.087099435 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 26 16:38:33 compute-0 nova_compute[185389]: 2026-01-26 16:38:33.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:38:33 compute-0 nova_compute[185389]: 2026-01-26 16:38:33.720 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:38:33 compute-0 nova_compute[185389]: 2026-01-26 16:38:33.720 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 16:38:34 compute-0 nova_compute[185389]: 2026-01-26 16:38:34.007 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "refresh_cache-60ba224f-9c5d-4eb4-b501-66d7339832b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:38:34 compute-0 nova_compute[185389]: 2026-01-26 16:38:34.008 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquired lock "refresh_cache-60ba224f-9c5d-4eb4-b501-66d7339832b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:38:34 compute-0 nova_compute[185389]: 2026-01-26 16:38:34.008 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 26 16:38:34 compute-0 nova_compute[185389]: 2026-01-26 16:38:34.009 185393 DEBUG nova.objects.instance [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 60ba224f-9c5d-4eb4-b501-66d7339832b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:38:34 compute-0 nova_compute[185389]: 2026-01-26 16:38:34.326 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:38:34 compute-0 nova_compute[185389]: 2026-01-26 16:38:34.909 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:38:35 compute-0 nova_compute[185389]: 2026-01-26 16:38:35.775 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Updating instance_info_cache with network_info: [{"id": "0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f", "address": "fa:16:3e:b0:51:31", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.57", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f88f3ae-fb", "ovs_interfaceid": "0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:38:36 compute-0 nova_compute[185389]: 2026-01-26 16:38:36.017 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Releasing lock "refresh_cache-60ba224f-9c5d-4eb4-b501-66d7339832b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:38:36 compute-0 nova_compute[185389]: 2026-01-26 16:38:36.018 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 26 16:38:36 compute-0 nova_compute[185389]: 2026-01-26 16:38:36.019 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:38:36 compute-0 nova_compute[185389]: 2026-01-26 16:38:36.020 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:38:36 compute-0 nova_compute[185389]: 2026-01-26 16:38:36.021 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:38:36 compute-0 nova_compute[185389]: 2026-01-26 16:38:36.022 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:38:36 compute-0 nova_compute[185389]: 2026-01-26 16:38:36.023 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:38:36 compute-0 nova_compute[185389]: 2026-01-26 16:38:36.024 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:38:36 compute-0 nova_compute[185389]: 2026-01-26 16:38:36.024 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:38:36 compute-0 nova_compute[185389]: 2026-01-26 16:38:36.051 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:38:36 compute-0 nova_compute[185389]: 2026-01-26 16:38:36.051 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:38:36 compute-0 nova_compute[185389]: 2026-01-26 16:38:36.052 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:38:36 compute-0 nova_compute[185389]: 2026-01-26 16:38:36.053 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:38:36 compute-0 nova_compute[185389]: 2026-01-26 16:38:36.162 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:38:36 compute-0 nova_compute[185389]: 2026-01-26 16:38:36.229 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:38:36 compute-0 nova_compute[185389]: 2026-01-26 16:38:36.231 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:38:36 compute-0 nova_compute[185389]: 2026-01-26 16:38:36.339 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json" returned: 0 in 0.108s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:38:36 compute-0 nova_compute[185389]: 2026-01-26 16:38:36.340 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:38:36 compute-0 nova_compute[185389]: 2026-01-26 16:38:36.406 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:38:36 compute-0 nova_compute[185389]: 2026-01-26 16:38:36.407 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:38:36 compute-0 nova_compute[185389]: 2026-01-26 16:38:36.494 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:38:36 compute-0 nova_compute[185389]: 2026-01-26 16:38:36.880 185393 WARNING nova.virt.libvirt.driver [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:38:36 compute-0 nova_compute[185389]: 2026-01-26 16:38:36.882 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5247MB free_disk=72.42624282836914GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:38:36 compute-0 nova_compute[185389]: 2026-01-26 16:38:36.882 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:38:36 compute-0 nova_compute[185389]: 2026-01-26 16:38:36.883 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:38:37 compute-0 nova_compute[185389]: 2026-01-26 16:38:37.005 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance 60ba224f-9c5d-4eb4-b501-66d7339832b9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:38:37 compute-0 nova_compute[185389]: 2026-01-26 16:38:37.006 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:38:37 compute-0 nova_compute[185389]: 2026-01-26 16:38:37.006 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:38:37 compute-0 nova_compute[185389]: 2026-01-26 16:38:37.073 185393 DEBUG nova.compute.provider_tree [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:38:37 compute-0 nova_compute[185389]: 2026-01-26 16:38:37.094 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:38:37 compute-0 nova_compute[185389]: 2026-01-26 16:38:37.096 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:38:37 compute-0 nova_compute[185389]: 2026-01-26 16:38:37.096 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.213s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:38:37 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:38:37.250 106955 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '0e:35:12', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'de:09:3c:73:7d:ab'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:38:37 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:38:37.252 106955 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 26 16:38:37 compute-0 nova_compute[185389]: 2026-01-26 16:38:37.253 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:38:39 compute-0 nova_compute[185389]: 2026-01-26 16:38:39.091 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:38:39 compute-0 nova_compute[185389]: 2026-01-26 16:38:39.330 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:38:39 compute-0 nova_compute[185389]: 2026-01-26 16:38:39.913 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:38:40 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:38:40.257 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1c72c11d-5050-47c3-89e8-912766588fb3, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:38:40 compute-0 nova_compute[185389]: 2026-01-26 16:38:40.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:38:41 compute-0 podman[239246]: 2026-01-26 16:38:41.223647167 +0000 UTC m=+0.107106416 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, distribution-scope=public, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, description=The Universal Base Image Minimal is a 
stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=openstack_network_exporter, io.openshift.expose-services=, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7)
Jan 26 16:38:44 compute-0 podman[239269]: 2026-01-26 16:38:44.229743163 +0000 UTC m=+0.110684144 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20260120, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Jan 26 16:38:44 compute-0 nova_compute[185389]: 2026-01-26 16:38:44.331 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:38:44 compute-0 nova_compute[185389]: 2026-01-26 16:38:44.917 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:38:46 compute-0 nova_compute[185389]: 2026-01-26 16:38:46.895 185393 DEBUG oslo_concurrency.lockutils [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Acquiring lock "2ee04f75-dc75-489c-85b5-19cd6d573bf1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:38:46 compute-0 nova_compute[185389]: 2026-01-26 16:38:46.896 185393 DEBUG oslo_concurrency.lockutils [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "2ee04f75-dc75-489c-85b5-19cd6d573bf1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:38:46 compute-0 nova_compute[185389]: 2026-01-26 16:38:46.914 185393 DEBUG nova.compute.manager [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:38:46 compute-0 nova_compute[185389]: 2026-01-26 16:38:46.994 185393 DEBUG oslo_concurrency.lockutils [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:38:46 compute-0 nova_compute[185389]: 2026-01-26 16:38:46.995 185393 DEBUG oslo_concurrency.lockutils [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:38:47 compute-0 nova_compute[185389]: 2026-01-26 16:38:47.005 185393 DEBUG nova.virt.hardware [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:38:47 compute-0 nova_compute[185389]: 2026-01-26 16:38:47.006 185393 INFO nova.compute.claims [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:38:47 compute-0 nova_compute[185389]: 2026-01-26 16:38:47.141 185393 DEBUG nova.compute.provider_tree [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:38:47 compute-0 nova_compute[185389]: 2026-01-26 16:38:47.159 185393 DEBUG nova.scheduler.client.report [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:38:47 compute-0 nova_compute[185389]: 2026-01-26 16:38:47.188 185393 DEBUG oslo_concurrency.lockutils [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.194s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:38:47 compute-0 nova_compute[185389]: 2026-01-26 16:38:47.190 185393 DEBUG nova.compute.manager [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:38:47 compute-0 nova_compute[185389]: 2026-01-26 16:38:47.278 185393 DEBUG nova.compute.manager [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:38:47 compute-0 nova_compute[185389]: 2026-01-26 16:38:47.279 185393 DEBUG nova.network.neutron [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:38:47 compute-0 nova_compute[185389]: 2026-01-26 16:38:47.304 185393 INFO nova.virt.libvirt.driver [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:38:47 compute-0 nova_compute[185389]: 2026-01-26 16:38:47.341 185393 DEBUG nova.compute.manager [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:38:47 compute-0 nova_compute[185389]: 2026-01-26 16:38:47.426 185393 DEBUG nova.compute.manager [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:38:47 compute-0 nova_compute[185389]: 2026-01-26 16:38:47.428 185393 DEBUG nova.virt.libvirt.driver [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:38:47 compute-0 nova_compute[185389]: 2026-01-26 16:38:47.428 185393 INFO nova.virt.libvirt.driver [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] Creating image(s)
Jan 26 16:38:47 compute-0 nova_compute[185389]: 2026-01-26 16:38:47.429 185393 DEBUG oslo_concurrency.lockutils [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Acquiring lock "/var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:38:47 compute-0 nova_compute[185389]: 2026-01-26 16:38:47.430 185393 DEBUG oslo_concurrency.lockutils [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "/var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:38:47 compute-0 nova_compute[185389]: 2026-01-26 16:38:47.431 185393 DEBUG oslo_concurrency.lockutils [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "/var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:38:47 compute-0 nova_compute[185389]: 2026-01-26 16:38:47.449 185393 DEBUG oslo_concurrency.processutils [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7a5e3188ac4de3f0ad8eeb8c9bbd6ccd05a86bb3 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:38:47 compute-0 nova_compute[185389]: 2026-01-26 16:38:47.511 185393 DEBUG oslo_concurrency.processutils [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7a5e3188ac4de3f0ad8eeb8c9bbd6ccd05a86bb3 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:38:47 compute-0 nova_compute[185389]: 2026-01-26 16:38:47.512 185393 DEBUG oslo_concurrency.lockutils [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Acquiring lock "7a5e3188ac4de3f0ad8eeb8c9bbd6ccd05a86bb3" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:38:47 compute-0 nova_compute[185389]: 2026-01-26 16:38:47.513 185393 DEBUG oslo_concurrency.lockutils [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "7a5e3188ac4de3f0ad8eeb8c9bbd6ccd05a86bb3" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:38:47 compute-0 nova_compute[185389]: 2026-01-26 16:38:47.528 185393 DEBUG oslo_concurrency.processutils [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7a5e3188ac4de3f0ad8eeb8c9bbd6ccd05a86bb3 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:38:47 compute-0 nova_compute[185389]: 2026-01-26 16:38:47.606 185393 DEBUG oslo_concurrency.processutils [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7a5e3188ac4de3f0ad8eeb8c9bbd6ccd05a86bb3 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:38:47 compute-0 nova_compute[185389]: 2026-01-26 16:38:47.607 185393 DEBUG oslo_concurrency.processutils [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/7a5e3188ac4de3f0ad8eeb8c9bbd6ccd05a86bb3,backing_fmt=raw /var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:38:47 compute-0 nova_compute[185389]: 2026-01-26 16:38:47.676 185393 DEBUG oslo_concurrency.processutils [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/7a5e3188ac4de3f0ad8eeb8c9bbd6ccd05a86bb3,backing_fmt=raw /var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk 1073741824" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:38:47 compute-0 nova_compute[185389]: 2026-01-26 16:38:47.678 185393 DEBUG oslo_concurrency.lockutils [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "7a5e3188ac4de3f0ad8eeb8c9bbd6ccd05a86bb3" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.165s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:38:47 compute-0 nova_compute[185389]: 2026-01-26 16:38:47.679 185393 DEBUG oslo_concurrency.processutils [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7a5e3188ac4de3f0ad8eeb8c9bbd6ccd05a86bb3 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:38:47 compute-0 nova_compute[185389]: 2026-01-26 16:38:47.762 185393 DEBUG oslo_concurrency.processutils [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7a5e3188ac4de3f0ad8eeb8c9bbd6ccd05a86bb3 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:38:47 compute-0 nova_compute[185389]: 2026-01-26 16:38:47.763 185393 DEBUG nova.virt.disk.api [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Checking if we can resize image /var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 26 16:38:47 compute-0 nova_compute[185389]: 2026-01-26 16:38:47.764 185393 DEBUG oslo_concurrency.processutils [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:38:47 compute-0 nova_compute[185389]: 2026-01-26 16:38:47.822 185393 DEBUG oslo_concurrency.processutils [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:38:47 compute-0 nova_compute[185389]: 2026-01-26 16:38:47.823 185393 DEBUG nova.virt.disk.api [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Cannot resize image /var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 26 16:38:47 compute-0 nova_compute[185389]: 2026-01-26 16:38:47.823 185393 DEBUG nova.objects.instance [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lazy-loading 'migration_context' on Instance uuid 2ee04f75-dc75-489c-85b5-19cd6d573bf1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:38:47 compute-0 nova_compute[185389]: 2026-01-26 16:38:47.836 185393 DEBUG oslo_concurrency.lockutils [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Acquiring lock "/var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:38:47 compute-0 nova_compute[185389]: 2026-01-26 16:38:47.836 185393 DEBUG oslo_concurrency.lockutils [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "/var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:38:47 compute-0 nova_compute[185389]: 2026-01-26 16:38:47.837 185393 DEBUG oslo_concurrency.lockutils [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "/var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:38:47 compute-0 nova_compute[185389]: 2026-01-26 16:38:47.850 185393 DEBUG oslo_concurrency.processutils [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:38:47 compute-0 nova_compute[185389]: 2026-01-26 16:38:47.914 185393 DEBUG oslo_concurrency.processutils [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:38:47 compute-0 nova_compute[185389]: 2026-01-26 16:38:47.915 185393 DEBUG oslo_concurrency.lockutils [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:38:47 compute-0 nova_compute[185389]: 2026-01-26 16:38:47.915 185393 DEBUG oslo_concurrency.lockutils [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:38:47 compute-0 nova_compute[185389]: 2026-01-26 16:38:47.926 185393 DEBUG oslo_concurrency.processutils [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:38:48 compute-0 nova_compute[185389]: 2026-01-26 16:38:48.018 185393 DEBUG oslo_concurrency.processutils [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:38:48 compute-0 nova_compute[185389]: 2026-01-26 16:38:48.019 185393 DEBUG oslo_concurrency.processutils [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:38:48 compute-0 nova_compute[185389]: 2026-01-26 16:38:48.067 185393 DEBUG oslo_concurrency.processutils [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.eph0 1073741824" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:38:48 compute-0 nova_compute[185389]: 2026-01-26 16:38:48.068 185393 DEBUG oslo_concurrency.lockutils [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.153s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:38:48 compute-0 nova_compute[185389]: 2026-01-26 16:38:48.069 185393 DEBUG oslo_concurrency.processutils [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:38:48 compute-0 nova_compute[185389]: 2026-01-26 16:38:48.152 185393 DEBUG oslo_concurrency.processutils [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:38:48 compute-0 nova_compute[185389]: 2026-01-26 16:38:48.153 185393 DEBUG nova.virt.libvirt.driver [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:38:48 compute-0 nova_compute[185389]: 2026-01-26 16:38:48.154 185393 DEBUG nova.virt.libvirt.driver [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] Ensure instance console log exists: /var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:38:48 compute-0 nova_compute[185389]: 2026-01-26 16:38:48.154 185393 DEBUG oslo_concurrency.lockutils [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:38:48 compute-0 nova_compute[185389]: 2026-01-26 16:38:48.154 185393 DEBUG oslo_concurrency.lockutils [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:38:48 compute-0 nova_compute[185389]: 2026-01-26 16:38:48.155 185393 DEBUG oslo_concurrency.lockutils [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:38:49 compute-0 podman[239317]: 2026-01-26 16:38:49.222200963 +0000 UTC m=+0.093389055 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 26 16:38:49 compute-0 nova_compute[185389]: 2026-01-26 16:38:49.334 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:38:49 compute-0 nova_compute[185389]: 2026-01-26 16:38:49.920 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:38:51 compute-0 nova_compute[185389]: 2026-01-26 16:38:51.320 185393 DEBUG nova.network.neutron [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] Successfully updated port: 5e252863-184d-4e1e-a33d-6e280cd72b51 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:38:51 compute-0 nova_compute[185389]: 2026-01-26 16:38:51.338 185393 DEBUG oslo_concurrency.lockutils [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Acquiring lock "refresh_cache-2ee04f75-dc75-489c-85b5-19cd6d573bf1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:38:51 compute-0 nova_compute[185389]: 2026-01-26 16:38:51.339 185393 DEBUG oslo_concurrency.lockutils [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Acquired lock "refresh_cache-2ee04f75-dc75-489c-85b5-19cd6d573bf1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:38:51 compute-0 nova_compute[185389]: 2026-01-26 16:38:51.340 185393 DEBUG nova.network.neutron [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:38:51 compute-0 nova_compute[185389]: 2026-01-26 16:38:51.494 185393 DEBUG nova.network.neutron [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:38:51 compute-0 nova_compute[185389]: 2026-01-26 16:38:51.708 185393 DEBUG nova.compute.manager [req-ac90e358-04b0-40df-b022-529c1bdb8fb2 req-a7d529c0-b320-496a-85b8-c3dd145373bb 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] Received event network-changed-5e252863-184d-4e1e-a33d-6e280cd72b51 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:38:51 compute-0 nova_compute[185389]: 2026-01-26 16:38:51.709 185393 DEBUG nova.compute.manager [req-ac90e358-04b0-40df-b022-529c1bdb8fb2 req-a7d529c0-b320-496a-85b8-c3dd145373bb 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] Refreshing instance network info cache due to event network-changed-5e252863-184d-4e1e-a33d-6e280cd72b51. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:38:51 compute-0 nova_compute[185389]: 2026-01-26 16:38:51.709 185393 DEBUG oslo_concurrency.lockutils [req-ac90e358-04b0-40df-b022-529c1bdb8fb2 req-a7d529c0-b320-496a-85b8-c3dd145373bb 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquiring lock "refresh_cache-2ee04f75-dc75-489c-85b5-19cd6d573bf1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:38:52 compute-0 podman[239341]: 2026-01-26 16:38:52.224532149 +0000 UTC m=+0.110635482 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Jan 26 16:38:52 compute-0 nova_compute[185389]: 2026-01-26 16:38:52.794 185393 DEBUG nova.network.neutron [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] Updating instance_info_cache with network_info: [{"id": "5e252863-184d-4e1e-a33d-6e280cd72b51", "address": "fa:16:3e:65:38:01", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.173", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e252863-18", "ovs_interfaceid": "5e252863-184d-4e1e-a33d-6e280cd72b51", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:38:52 compute-0 nova_compute[185389]: 2026-01-26 16:38:52.817 185393 DEBUG oslo_concurrency.lockutils [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Releasing lock "refresh_cache-2ee04f75-dc75-489c-85b5-19cd6d573bf1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:38:52 compute-0 nova_compute[185389]: 2026-01-26 16:38:52.818 185393 DEBUG nova.compute.manager [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] Instance network_info: |[{"id": "5e252863-184d-4e1e-a33d-6e280cd72b51", "address": "fa:16:3e:65:38:01", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.173", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e252863-18", "ovs_interfaceid": "5e252863-184d-4e1e-a33d-6e280cd72b51", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:38:52 compute-0 nova_compute[185389]: 2026-01-26 16:38:52.820 185393 DEBUG oslo_concurrency.lockutils [req-ac90e358-04b0-40df-b022-529c1bdb8fb2 req-a7d529c0-b320-496a-85b8-c3dd145373bb 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquired lock "refresh_cache-2ee04f75-dc75-489c-85b5-19cd6d573bf1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:38:52 compute-0 nova_compute[185389]: 2026-01-26 16:38:52.821 185393 DEBUG nova.network.neutron [req-ac90e358-04b0-40df-b022-529c1bdb8fb2 req-a7d529c0-b320-496a-85b8-c3dd145373bb 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] Refreshing network info cache for port 5e252863-184d-4e1e-a33d-6e280cd72b51 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:38:52 compute-0 nova_compute[185389]: 2026-01-26 16:38:52.827 185393 DEBUG nova.virt.libvirt.driver [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] Start _get_guest_xml network_info=[{"id": "5e252863-184d-4e1e-a33d-6e280cd72b51", "address": "fa:16:3e:65:38:01", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.173", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e252863-18", "ovs_interfaceid": "5e252863-184d-4e1e-a33d-6e280cd72b51", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-01-26T16:35:52Z,direct_url=<?>,disk_format='qcow2',id=718285d9-0264-40f4-9fb3-d2faff180284,min_disk=0,min_ram=0,name='cirros',owner='aa8f1f3bbce34237a208c8e92ca9286f',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-01-26T16:35:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'boot_index': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '718285d9-0264-40f4-9fb3-d2faff180284'}], 'ephemerals': [{'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'device_name': '/dev/vdb', 'disk_bus': 'virtio', 'size': 1, 'encryption_secret_uuid': None, 'encryption_format': None, 'encrypted': False}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:38:52 compute-0 nova_compute[185389]: 2026-01-26 16:38:52.841 185393 WARNING nova.virt.libvirt.driver [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:38:52 compute-0 nova_compute[185389]: 2026-01-26 16:38:52.851 185393 DEBUG nova.virt.libvirt.host [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:38:52 compute-0 nova_compute[185389]: 2026-01-26 16:38:52.851 185393 DEBUG nova.virt.libvirt.host [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:38:52 compute-0 nova_compute[185389]: 2026-01-26 16:38:52.857 185393 DEBUG nova.virt.libvirt.host [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:38:52 compute-0 nova_compute[185389]: 2026-01-26 16:38:52.857 185393 DEBUG nova.virt.libvirt.host [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:38:52 compute-0 nova_compute[185389]: 2026-01-26 16:38:52.858 185393 DEBUG nova.virt.libvirt.driver [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:38:52 compute-0 nova_compute[185389]: 2026-01-26 16:38:52.859 185393 DEBUG nova.virt.hardware [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T16:35:58Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='c2a8df4d-a1d7-42a3-8279-8c7de8a1a662',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-01-26T16:35:52Z,direct_url=<?>,disk_format='qcow2',id=718285d9-0264-40f4-9fb3-d2faff180284,min_disk=0,min_ram=0,name='cirros',owner='aa8f1f3bbce34237a208c8e92ca9286f',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-01-26T16:35:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:38:52 compute-0 nova_compute[185389]: 2026-01-26 16:38:52.860 185393 DEBUG nova.virt.hardware [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:38:52 compute-0 nova_compute[185389]: 2026-01-26 16:38:52.860 185393 DEBUG nova.virt.hardware [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:38:52 compute-0 nova_compute[185389]: 2026-01-26 16:38:52.861 185393 DEBUG nova.virt.hardware [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:38:52 compute-0 nova_compute[185389]: 2026-01-26 16:38:52.861 185393 DEBUG nova.virt.hardware [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:38:52 compute-0 nova_compute[185389]: 2026-01-26 16:38:52.862 185393 DEBUG nova.virt.hardware [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:38:52 compute-0 nova_compute[185389]: 2026-01-26 16:38:52.862 185393 DEBUG nova.virt.hardware [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:38:52 compute-0 nova_compute[185389]: 2026-01-26 16:38:52.863 185393 DEBUG nova.virt.hardware [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:38:52 compute-0 nova_compute[185389]: 2026-01-26 16:38:52.863 185393 DEBUG nova.virt.hardware [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:38:52 compute-0 nova_compute[185389]: 2026-01-26 16:38:52.864 185393 DEBUG nova.virt.hardware [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:38:52 compute-0 nova_compute[185389]: 2026-01-26 16:38:52.864 185393 DEBUG nova.virt.hardware [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:38:52 compute-0 nova_compute[185389]: 2026-01-26 16:38:52.869 185393 DEBUG nova.virt.libvirt.vif [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:38:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-vo2qfhx-yjgyfumqtzbt-dg43u2kjlm35-vnf-tep2eibpzhxe',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-vo2qfhx-yjgyfumqtzbt-dg43u2kjlm35-vnf-tep2eibpzhxe',id=2,image_ref='718285d9-0264-40f4-9fb3-d2faff180284',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='06b33269-d1c6-4fb9-a44b-be304982a550'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='aa8f1f3bbce34237a208c8e92ca9286f',ramdisk_id='',reservation_id='r-tcf070kr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='718285d9-0264-40f4-9fb3-d2faff180284',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha2
56='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:38:47Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0zMjQwNDE0NjI5MzgxODc5NjQxPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTMyNDA0MTQ2MjkzODE4Nzk2NDE9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MzI0MDQxNDYyOTM4MTg3OTY0MT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTMyNDA0MTQ2MjkzODE4Nzk2NDE9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uO
iBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvb
GliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0zMjQwNDE0NjI5MzgxODc5NjQxPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0zMjQwNDE0NjI5MzgxODc5NjQxPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob
2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJnc
Jan 26 16:38:52 compute-0 nova_compute[185389]: ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MzI0MDQxNDYyOTM4MTg3OTY0MT09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1Uc
mFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTMyNDA0MTQ2MjkzODE4Nzk2NDE9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0zMjQwNDE0NjI5MzgxODc5NjQxPT0tLQo=',user_id='3c0ab9326d69400aa6a4a91432885d7f',uuid=2ee04f75-dc75-489c-85b5-19cd6d573bf1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5e252863-184d-4e1e-a33d-6e280cd72b51", "address": "fa:16:3e:65:38:01", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.173", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e252863-18", "ovs_interfaceid": "5e252863-184d-4e1e-a33d-6e280cd72b51", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:38:52 compute-0 nova_compute[185389]: 2026-01-26 16:38:52.870 185393 DEBUG nova.network.os_vif_util [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Converting VIF {"id": "5e252863-184d-4e1e-a33d-6e280cd72b51", "address": "fa:16:3e:65:38:01", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.173", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e252863-18", "ovs_interfaceid": "5e252863-184d-4e1e-a33d-6e280cd72b51", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:38:52 compute-0 nova_compute[185389]: 2026-01-26 16:38:52.871 185393 DEBUG nova.network.os_vif_util [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:38:01,bridge_name='br-int',has_traffic_filtering=True,id=5e252863-184d-4e1e-a33d-6e280cd72b51,network=Network(74318d1e-b1d8-47d5-8ac3-218d758610fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap5e252863-18') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:38:52 compute-0 nova_compute[185389]: 2026-01-26 16:38:52.873 185393 DEBUG nova.objects.instance [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lazy-loading 'pci_devices' on Instance uuid 2ee04f75-dc75-489c-85b5-19cd6d573bf1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:38:52 compute-0 nova_compute[185389]: 2026-01-26 16:38:52.891 185393 DEBUG nova.virt.libvirt.driver [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:38:52 compute-0 nova_compute[185389]:   <uuid>2ee04f75-dc75-489c-85b5-19cd6d573bf1</uuid>
Jan 26 16:38:52 compute-0 nova_compute[185389]:   <name>instance-00000002</name>
Jan 26 16:38:52 compute-0 nova_compute[185389]:   <memory>524288</memory>
Jan 26 16:38:52 compute-0 nova_compute[185389]:   <vcpu>1</vcpu>
Jan 26 16:38:52 compute-0 nova_compute[185389]:   <metadata>
Jan 26 16:38:52 compute-0 nova_compute[185389]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:38:52 compute-0 nova_compute[185389]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:38:52 compute-0 nova_compute[185389]:       <nova:name>vn-vo2qfhx-yjgyfumqtzbt-dg43u2kjlm35-vnf-tep2eibpzhxe</nova:name>
Jan 26 16:38:52 compute-0 nova_compute[185389]:       <nova:creationTime>2026-01-26 16:38:52</nova:creationTime>
Jan 26 16:38:52 compute-0 nova_compute[185389]:       <nova:flavor name="m1.small">
Jan 26 16:38:52 compute-0 nova_compute[185389]:         <nova:memory>512</nova:memory>
Jan 26 16:38:52 compute-0 nova_compute[185389]:         <nova:disk>1</nova:disk>
Jan 26 16:38:52 compute-0 nova_compute[185389]:         <nova:swap>0</nova:swap>
Jan 26 16:38:52 compute-0 nova_compute[185389]:         <nova:ephemeral>1</nova:ephemeral>
Jan 26 16:38:52 compute-0 nova_compute[185389]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:38:52 compute-0 nova_compute[185389]:       </nova:flavor>
Jan 26 16:38:52 compute-0 nova_compute[185389]:       <nova:owner>
Jan 26 16:38:52 compute-0 nova_compute[185389]:         <nova:user uuid="3c0ab9326d69400aa6a4a91432885d7f">admin</nova:user>
Jan 26 16:38:52 compute-0 nova_compute[185389]:         <nova:project uuid="aa8f1f3bbce34237a208c8e92ca9286f">admin</nova:project>
Jan 26 16:38:52 compute-0 nova_compute[185389]:       </nova:owner>
Jan 26 16:38:52 compute-0 nova_compute[185389]:       <nova:root type="image" uuid="718285d9-0264-40f4-9fb3-d2faff180284"/>
Jan 26 16:38:52 compute-0 nova_compute[185389]:       <nova:ports>
Jan 26 16:38:52 compute-0 nova_compute[185389]:         <nova:port uuid="5e252863-184d-4e1e-a33d-6e280cd72b51">
Jan 26 16:38:52 compute-0 nova_compute[185389]:           <nova:ip type="fixed" address="192.168.0.173" ipVersion="4"/>
Jan 26 16:38:52 compute-0 nova_compute[185389]:         </nova:port>
Jan 26 16:38:52 compute-0 nova_compute[185389]:       </nova:ports>
Jan 26 16:38:52 compute-0 nova_compute[185389]:     </nova:instance>
Jan 26 16:38:52 compute-0 nova_compute[185389]:   </metadata>
Jan 26 16:38:52 compute-0 nova_compute[185389]:   <sysinfo type="smbios">
Jan 26 16:38:52 compute-0 nova_compute[185389]:     <system>
Jan 26 16:38:52 compute-0 nova_compute[185389]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:38:52 compute-0 nova_compute[185389]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:38:52 compute-0 nova_compute[185389]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:38:52 compute-0 nova_compute[185389]:       <entry name="serial">2ee04f75-dc75-489c-85b5-19cd6d573bf1</entry>
Jan 26 16:38:52 compute-0 nova_compute[185389]:       <entry name="uuid">2ee04f75-dc75-489c-85b5-19cd6d573bf1</entry>
Jan 26 16:38:52 compute-0 nova_compute[185389]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:38:52 compute-0 nova_compute[185389]:     </system>
Jan 26 16:38:52 compute-0 nova_compute[185389]:   </sysinfo>
Jan 26 16:38:52 compute-0 nova_compute[185389]:   <os>
Jan 26 16:38:52 compute-0 nova_compute[185389]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:38:52 compute-0 nova_compute[185389]:     <boot dev="hd"/>
Jan 26 16:38:52 compute-0 nova_compute[185389]:     <smbios mode="sysinfo"/>
Jan 26 16:38:52 compute-0 nova_compute[185389]:   </os>
Jan 26 16:38:52 compute-0 nova_compute[185389]:   <features>
Jan 26 16:38:52 compute-0 nova_compute[185389]:     <acpi/>
Jan 26 16:38:52 compute-0 nova_compute[185389]:     <apic/>
Jan 26 16:38:52 compute-0 nova_compute[185389]:     <vmcoreinfo/>
Jan 26 16:38:52 compute-0 nova_compute[185389]:   </features>
Jan 26 16:38:52 compute-0 nova_compute[185389]:   <clock offset="utc">
Jan 26 16:38:52 compute-0 nova_compute[185389]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:38:52 compute-0 nova_compute[185389]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:38:52 compute-0 nova_compute[185389]:     <timer name="hpet" present="no"/>
Jan 26 16:38:52 compute-0 nova_compute[185389]:   </clock>
Jan 26 16:38:52 compute-0 nova_compute[185389]:   <cpu mode="host-model" match="exact">
Jan 26 16:38:52 compute-0 nova_compute[185389]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:38:52 compute-0 nova_compute[185389]:   </cpu>
Jan 26 16:38:52 compute-0 nova_compute[185389]:   <devices>
Jan 26 16:38:52 compute-0 nova_compute[185389]:     <disk type="file" device="disk">
Jan 26 16:38:52 compute-0 nova_compute[185389]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 26 16:38:52 compute-0 nova_compute[185389]:       <source file="/var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk"/>
Jan 26 16:38:52 compute-0 nova_compute[185389]:       <target dev="vda" bus="virtio"/>
Jan 26 16:38:52 compute-0 nova_compute[185389]:     </disk>
Jan 26 16:38:52 compute-0 nova_compute[185389]:     <disk type="file" device="disk">
Jan 26 16:38:52 compute-0 nova_compute[185389]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 26 16:38:52 compute-0 nova_compute[185389]:       <source file="/var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.eph0"/>
Jan 26 16:38:52 compute-0 nova_compute[185389]:       <target dev="vdb" bus="virtio"/>
Jan 26 16:38:52 compute-0 nova_compute[185389]:     </disk>
Jan 26 16:38:52 compute-0 nova_compute[185389]:     <disk type="file" device="cdrom">
Jan 26 16:38:52 compute-0 nova_compute[185389]:       <driver name="qemu" type="raw" cache="none"/>
Jan 26 16:38:52 compute-0 nova_compute[185389]:       <source file="/var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.config"/>
Jan 26 16:38:52 compute-0 nova_compute[185389]:       <target dev="sda" bus="sata"/>
Jan 26 16:38:52 compute-0 nova_compute[185389]:     </disk>
Jan 26 16:38:52 compute-0 nova_compute[185389]:     <interface type="ethernet">
Jan 26 16:38:52 compute-0 nova_compute[185389]:       <mac address="fa:16:3e:65:38:01"/>
Jan 26 16:38:52 compute-0 nova_compute[185389]:       <model type="virtio"/>
Jan 26 16:38:52 compute-0 nova_compute[185389]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:38:52 compute-0 nova_compute[185389]:       <mtu size="1442"/>
Jan 26 16:38:52 compute-0 nova_compute[185389]:       <target dev="tap5e252863-18"/>
Jan 26 16:38:52 compute-0 nova_compute[185389]:     </interface>
Jan 26 16:38:52 compute-0 nova_compute[185389]:     <serial type="pty">
Jan 26 16:38:52 compute-0 nova_compute[185389]:       <log file="/var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/console.log" append="off"/>
Jan 26 16:38:52 compute-0 nova_compute[185389]:     </serial>
Jan 26 16:38:52 compute-0 nova_compute[185389]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:38:52 compute-0 nova_compute[185389]:     <video>
Jan 26 16:38:52 compute-0 nova_compute[185389]:       <model type="virtio"/>
Jan 26 16:38:52 compute-0 nova_compute[185389]:     </video>
Jan 26 16:38:52 compute-0 nova_compute[185389]:     <input type="tablet" bus="usb"/>
Jan 26 16:38:52 compute-0 nova_compute[185389]:     <rng model="virtio">
Jan 26 16:38:52 compute-0 nova_compute[185389]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:38:52 compute-0 nova_compute[185389]:     </rng>
Jan 26 16:38:52 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:38:52 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:38:52 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:38:52 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:38:52 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:38:52 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:38:52 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:38:52 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:38:52 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:38:52 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:38:52 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:38:52 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:38:52 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:38:52 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:38:52 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:38:52 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:38:52 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:38:52 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:38:52 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:38:52 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:38:52 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:38:52 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:38:52 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:38:52 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:38:52 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:38:52 compute-0 nova_compute[185389]:     <controller type="usb" index="0"/>
Jan 26 16:38:52 compute-0 nova_compute[185389]:     <memballoon model="virtio">
Jan 26 16:38:52 compute-0 nova_compute[185389]:       <stats period="10"/>
Jan 26 16:38:52 compute-0 nova_compute[185389]:     </memballoon>
Jan 26 16:38:52 compute-0 nova_compute[185389]:   </devices>
Jan 26 16:38:52 compute-0 nova_compute[185389]: </domain>
Jan 26 16:38:52 compute-0 nova_compute[185389]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:38:52 compute-0 nova_compute[185389]: 2026-01-26 16:38:52.893 185393 DEBUG nova.compute.manager [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] Preparing to wait for external event network-vif-plugged-5e252863-184d-4e1e-a33d-6e280cd72b51 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:38:52 compute-0 nova_compute[185389]: 2026-01-26 16:38:52.893 185393 DEBUG oslo_concurrency.lockutils [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Acquiring lock "2ee04f75-dc75-489c-85b5-19cd6d573bf1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:38:52 compute-0 nova_compute[185389]: 2026-01-26 16:38:52.894 185393 DEBUG oslo_concurrency.lockutils [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "2ee04f75-dc75-489c-85b5-19cd6d573bf1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:38:52 compute-0 nova_compute[185389]: 2026-01-26 16:38:52.894 185393 DEBUG oslo_concurrency.lockutils [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "2ee04f75-dc75-489c-85b5-19cd6d573bf1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:38:52 compute-0 nova_compute[185389]: 2026-01-26 16:38:52.895 185393 DEBUG nova.virt.libvirt.vif [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:38:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-vo2qfhx-yjgyfumqtzbt-dg43u2kjlm35-vnf-tep2eibpzhxe',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-vo2qfhx-yjgyfumqtzbt-dg43u2kjlm35-vnf-tep2eibpzhxe',id=2,image_ref='718285d9-0264-40f4-9fb3-d2faff180284',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='06b33269-d1c6-4fb9-a44b-be304982a550'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='aa8f1f3bbce34237a208c8e92ca9286f',ramdisk_id='',reservation_id='r-tcf070kr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='718285d9-0264-40f4-9fb3-d2faff180284',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.open
stack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:38:47Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0zMjQwNDE0NjI5MzgxODc5NjQxPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTMyNDA0MTQ2MjkzODE4Nzk2NDE9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MzI0MDQxNDYyOTM4MTg3OTY0MT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTMyNDA0MTQ2MjkzODE4Nzk2NDE9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3B
vc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4
oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0zMjQwNDE0NjI5MzgxODc5NjQxPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0zMjQwNDE0NjI5MzgxODc5NjQxPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2d
TdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9
Jan 26 16:38:52 compute-0 nova_compute[185389]: wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MzI0MDQxNDYyOTM4MTg3OTY0MT09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29
udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTMyNDA0MTQ2MjkzODE4Nzk2NDE9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0zMjQwNDE0NjI5MzgxODc5NjQxPT0tLQo=',user_id='3c0ab9326d69400aa6a4a91432885d7f',uuid=2ee04f75-dc75-489c-85b5-19cd6d573bf1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5e252863-184d-4e1e-a33d-6e280cd72b51", "address": "fa:16:3e:65:38:01", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.173", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e252863-18", "ovs_interfaceid": "5e252863-184d-4e1e-a33d-6e280cd72b51", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:38:52 compute-0 nova_compute[185389]: 2026-01-26 16:38:52.895 185393 DEBUG nova.network.os_vif_util [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Converting VIF {"id": "5e252863-184d-4e1e-a33d-6e280cd72b51", "address": "fa:16:3e:65:38:01", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.173", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e252863-18", "ovs_interfaceid": "5e252863-184d-4e1e-a33d-6e280cd72b51", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:38:52 compute-0 nova_compute[185389]: 2026-01-26 16:38:52.896 185393 DEBUG nova.network.os_vif_util [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:38:01,bridge_name='br-int',has_traffic_filtering=True,id=5e252863-184d-4e1e-a33d-6e280cd72b51,network=Network(74318d1e-b1d8-47d5-8ac3-218d758610fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap5e252863-18') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:38:52 compute-0 nova_compute[185389]: 2026-01-26 16:38:52.897 185393 DEBUG os_vif [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:38:01,bridge_name='br-int',has_traffic_filtering=True,id=5e252863-184d-4e1e-a33d-6e280cd72b51,network=Network(74318d1e-b1d8-47d5-8ac3-218d758610fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap5e252863-18') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:38:52 compute-0 nova_compute[185389]: 2026-01-26 16:38:52.897 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:38:52 compute-0 nova_compute[185389]: 2026-01-26 16:38:52.898 185393 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:38:52 compute-0 nova_compute[185389]: 2026-01-26 16:38:52.898 185393 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:38:52 compute-0 nova_compute[185389]: 2026-01-26 16:38:52.904 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:38:52 compute-0 nova_compute[185389]: 2026-01-26 16:38:52.904 185393 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5e252863-18, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:38:52 compute-0 nova_compute[185389]: 2026-01-26 16:38:52.905 185393 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5e252863-18, col_values=(('external_ids', {'iface-id': '5e252863-184d-4e1e-a33d-6e280cd72b51', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:65:38:01', 'vm-uuid': '2ee04f75-dc75-489c-85b5-19cd6d573bf1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:38:52 compute-0 nova_compute[185389]: 2026-01-26 16:38:52.908 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:38:52 compute-0 NetworkManager[56253]: <info>  [1769445532.9098] manager: (tap5e252863-18): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/27)
Jan 26 16:38:52 compute-0 nova_compute[185389]: 2026-01-26 16:38:52.911 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:38:52 compute-0 nova_compute[185389]: 2026-01-26 16:38:52.920 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:38:52 compute-0 nova_compute[185389]: 2026-01-26 16:38:52.921 185393 INFO os_vif [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:38:01,bridge_name='br-int',has_traffic_filtering=True,id=5e252863-184d-4e1e-a33d-6e280cd72b51,network=Network(74318d1e-b1d8-47d5-8ac3-218d758610fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap5e252863-18')
Jan 26 16:38:52 compute-0 nova_compute[185389]: 2026-01-26 16:38:52.975 185393 DEBUG nova.virt.libvirt.driver [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:38:52 compute-0 nova_compute[185389]: 2026-01-26 16:38:52.976 185393 DEBUG nova.virt.libvirt.driver [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:38:52 compute-0 nova_compute[185389]: 2026-01-26 16:38:52.976 185393 DEBUG nova.virt.libvirt.driver [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:38:52 compute-0 nova_compute[185389]: 2026-01-26 16:38:52.976 185393 DEBUG nova.virt.libvirt.driver [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] No VIF found with MAC fa:16:3e:65:38:01, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:38:52 compute-0 nova_compute[185389]: 2026-01-26 16:38:52.977 185393 INFO nova.virt.libvirt.driver [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] Using config drive
Jan 26 16:38:53 compute-0 rsyslogd[235842]: message too long (8192) with configured size 8096, begin of message is: 2026-01-26 16:38:52.869 185393 DEBUG nova.virt.libvirt.vif [None req-42d82ff3-6b [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 26 16:38:53 compute-0 rsyslogd[235842]: message too long (8192) with configured size 8096, begin of message is: 2026-01-26 16:38:52.895 185393 DEBUG nova.virt.libvirt.vif [None req-42d82ff3-6b [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 26 16:38:53 compute-0 nova_compute[185389]: 2026-01-26 16:38:53.725 185393 INFO nova.virt.libvirt.driver [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] Creating config drive at /var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.config
Jan 26 16:38:53 compute-0 nova_compute[185389]: 2026-01-26 16:38:53.736 185393 DEBUG oslo_concurrency.processutils [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqs8ge_9f execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:38:53 compute-0 nova_compute[185389]: 2026-01-26 16:38:53.885 185393 DEBUG oslo_concurrency.processutils [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqs8ge_9f" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:38:53 compute-0 kernel: tap5e252863-18: entered promiscuous mode
Jan 26 16:38:53 compute-0 NetworkManager[56253]: <info>  [1769445533.9753] manager: (tap5e252863-18): new Tun device (/org/freedesktop/NetworkManager/Devices/28)
Jan 26 16:38:53 compute-0 ovn_controller[97699]: 2026-01-26T16:38:53Z|00035|binding|INFO|Claiming lport 5e252863-184d-4e1e-a33d-6e280cd72b51 for this chassis.
Jan 26 16:38:53 compute-0 nova_compute[185389]: 2026-01-26 16:38:53.978 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:38:53 compute-0 ovn_controller[97699]: 2026-01-26T16:38:53Z|00036|binding|INFO|5e252863-184d-4e1e-a33d-6e280cd72b51: Claiming fa:16:3e:65:38:01 192.168.0.173
Jan 26 16:38:53 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:38:53.987 106955 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:38:01 192.168.0.173'], port_security=['fa:16:3e:65:38:01 192.168.0.173'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-2qbervo2qfhx-yjgyfumqtzbt-dg43u2kjlm35-port-7427fbcuf3nf', 'neutron:cidrs': '192.168.0.173/24', 'neutron:device_id': '2ee04f75-dc75-489c-85b5-19cd6d573bf1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74318d1e-b1d8-47d5-8ac3-218d758610fe', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-2qbervo2qfhx-yjgyfumqtzbt-dg43u2kjlm35-port-7427fbcuf3nf', 'neutron:project_id': 'aa8f1f3bbce34237a208c8e92ca9286f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c6ae7745-53c4-4846-bf8b-0c9f0303bef3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.200'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1197b65b-eda5-4824-97ab-519748b0b6a7, chassis=[<ovs.db.idl.Row object at 0x7faee18cfdf0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7faee18cfdf0>], logical_port=5e252863-184d-4e1e-a33d-6e280cd72b51) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:38:53 compute-0 ovn_controller[97699]: 2026-01-26T16:38:53Z|00037|binding|INFO|Setting lport 5e252863-184d-4e1e-a33d-6e280cd72b51 ovn-installed in OVS
Jan 26 16:38:53 compute-0 ovn_controller[97699]: 2026-01-26T16:38:53Z|00038|binding|INFO|Setting lport 5e252863-184d-4e1e-a33d-6e280cd72b51 up in Southbound
Jan 26 16:38:53 compute-0 nova_compute[185389]: 2026-01-26 16:38:53.997 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:38:54 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:38:53.989 106955 INFO neutron.agent.ovn.metadata.agent [-] Port 5e252863-184d-4e1e-a33d-6e280cd72b51 in datapath 74318d1e-b1d8-47d5-8ac3-218d758610fe bound to our chassis
Jan 26 16:38:54 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:38:53.993 106955 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 74318d1e-b1d8-47d5-8ac3-218d758610fe
Jan 26 16:38:54 compute-0 nova_compute[185389]: 2026-01-26 16:38:54.012 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:38:54 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:38:54.015 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[68198756-9ec1-4fe3-98c8-16965d0d74c4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:38:54 compute-0 systemd-machined[156679]: New machine qemu-2-instance-00000002.
Jan 26 16:38:54 compute-0 systemd[1]: Started Virtual Machine qemu-2-instance-00000002.
Jan 26 16:38:54 compute-0 systemd-udevd[239403]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:38:54 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:38:54.051 238787 DEBUG oslo.privsep.daemon [-] privsep: reply[1a45807a-af7c-43ed-abdb-bf48347313af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:38:54 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:38:54.054 238787 DEBUG oslo.privsep.daemon [-] privsep: reply[b22409a3-690a-4b6e-8198-11b2de8c7e91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:38:54 compute-0 NetworkManager[56253]: <info>  [1769445534.0764] device (tap5e252863-18): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:38:54 compute-0 NetworkManager[56253]: <info>  [1769445534.0771] device (tap5e252863-18): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:38:54 compute-0 podman[239372]: 2026-01-26 16:38:54.08033418 +0000 UTC m=+0.118436255 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 16:38:54 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:38:54.081 238787 DEBUG oslo.privsep.daemon [-] privsep: reply[1d4fd104-4e92-46af-8de3-62aaf01300f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:38:54 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:38:54.097 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[d43a6899-54c6-4052-946b-0881f46d7e81]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap74318d1e-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b2:6c:31'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 410415, 'reachable_time': 36703, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 239411, 'error': None, 'target': 'ovnmeta-74318d1e-b1d8-47d5-8ac3-218d758610fe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:38:54 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:38:54.116 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[f4d808f4-061f-4583-8f59-3672da93b71e]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap74318d1e-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 410434, 'tstamp': 410434}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 239414, 'error': None, 'target': 'ovnmeta-74318d1e-b1d8-47d5-8ac3-218d758610fe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap74318d1e-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 410439, 'tstamp': 410439}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 239414, 'error': None, 'target': 'ovnmeta-74318d1e-b1d8-47d5-8ac3-218d758610fe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:38:54 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:38:54.119 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap74318d1e-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:38:54 compute-0 nova_compute[185389]: 2026-01-26 16:38:54.121 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:38:54 compute-0 nova_compute[185389]: 2026-01-26 16:38:54.123 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:38:54 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:38:54.123 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap74318d1e-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:38:54 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:38:54.124 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:38:54 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:38:54.124 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap74318d1e-b0, col_values=(('external_ids', {'iface-id': '6045fbea-609e-4588-93b4-ca6dda4224d1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:38:54 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:38:54.124 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:38:54 compute-0 nova_compute[185389]: 2026-01-26 16:38:54.336 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:38:54 compute-0 nova_compute[185389]: 2026-01-26 16:38:54.455 185393 DEBUG nova.virt.driver [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Emitting event <LifecycleEvent: 1769445534.4540606, 2ee04f75-dc75-489c-85b5-19cd6d573bf1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:38:54 compute-0 nova_compute[185389]: 2026-01-26 16:38:54.455 185393 INFO nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] VM Started (Lifecycle Event)
Jan 26 16:38:54 compute-0 nova_compute[185389]: 2026-01-26 16:38:54.544 185393 DEBUG nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:38:54 compute-0 nova_compute[185389]: 2026-01-26 16:38:54.553 185393 DEBUG nova.virt.driver [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Emitting event <LifecycleEvent: 1769445534.4542317, 2ee04f75-dc75-489c-85b5-19cd6d573bf1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:38:54 compute-0 nova_compute[185389]: 2026-01-26 16:38:54.554 185393 INFO nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] VM Paused (Lifecycle Event)
Jan 26 16:38:54 compute-0 nova_compute[185389]: 2026-01-26 16:38:54.652 185393 DEBUG nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:38:54 compute-0 nova_compute[185389]: 2026-01-26 16:38:54.672 185393 DEBUG nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:38:54 compute-0 nova_compute[185389]: 2026-01-26 16:38:54.708 185393 INFO nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:38:54 compute-0 nova_compute[185389]: 2026-01-26 16:38:54.916 185393 DEBUG nova.compute.manager [req-dc55173b-80b7-46da-be7f-ed4283073876 req-ca3f78d9-8a2b-4112-b01f-0f2fb57d6a0c 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] Received event network-vif-plugged-5e252863-184d-4e1e-a33d-6e280cd72b51 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:38:54 compute-0 nova_compute[185389]: 2026-01-26 16:38:54.916 185393 DEBUG oslo_concurrency.lockutils [req-dc55173b-80b7-46da-be7f-ed4283073876 req-ca3f78d9-8a2b-4112-b01f-0f2fb57d6a0c 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquiring lock "2ee04f75-dc75-489c-85b5-19cd6d573bf1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:38:54 compute-0 nova_compute[185389]: 2026-01-26 16:38:54.917 185393 DEBUG oslo_concurrency.lockutils [req-dc55173b-80b7-46da-be7f-ed4283073876 req-ca3f78d9-8a2b-4112-b01f-0f2fb57d6a0c 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "2ee04f75-dc75-489c-85b5-19cd6d573bf1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:38:54 compute-0 nova_compute[185389]: 2026-01-26 16:38:54.917 185393 DEBUG oslo_concurrency.lockutils [req-dc55173b-80b7-46da-be7f-ed4283073876 req-ca3f78d9-8a2b-4112-b01f-0f2fb57d6a0c 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "2ee04f75-dc75-489c-85b5-19cd6d573bf1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:38:54 compute-0 nova_compute[185389]: 2026-01-26 16:38:54.918 185393 DEBUG nova.compute.manager [req-dc55173b-80b7-46da-be7f-ed4283073876 req-ca3f78d9-8a2b-4112-b01f-0f2fb57d6a0c 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] Processing event network-vif-plugged-5e252863-184d-4e1e-a33d-6e280cd72b51 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:38:54 compute-0 nova_compute[185389]: 2026-01-26 16:38:54.919 185393 DEBUG nova.compute.manager [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:38:55 compute-0 nova_compute[185389]: 2026-01-26 16:38:55.072 185393 DEBUG nova.virt.driver [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Emitting event <LifecycleEvent: 1769445534.9271753, 2ee04f75-dc75-489c-85b5-19cd6d573bf1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:38:55 compute-0 nova_compute[185389]: 2026-01-26 16:38:55.073 185393 INFO nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] VM Resumed (Lifecycle Event)
Jan 26 16:38:55 compute-0 nova_compute[185389]: 2026-01-26 16:38:55.077 185393 DEBUG nova.virt.libvirt.driver [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:38:55 compute-0 nova_compute[185389]: 2026-01-26 16:38:55.084 185393 INFO nova.virt.libvirt.driver [-] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] Instance spawned successfully.
Jan 26 16:38:55 compute-0 nova_compute[185389]: 2026-01-26 16:38:55.084 185393 DEBUG nova.virt.libvirt.driver [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:38:55 compute-0 nova_compute[185389]: 2026-01-26 16:38:55.105 185393 DEBUG nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:38:55 compute-0 nova_compute[185389]: 2026-01-26 16:38:55.115 185393 DEBUG nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:38:55 compute-0 nova_compute[185389]: 2026-01-26 16:38:55.119 185393 DEBUG nova.virt.libvirt.driver [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:38:55 compute-0 nova_compute[185389]: 2026-01-26 16:38:55.119 185393 DEBUG nova.virt.libvirt.driver [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:38:55 compute-0 nova_compute[185389]: 2026-01-26 16:38:55.120 185393 DEBUG nova.virt.libvirt.driver [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:38:55 compute-0 nova_compute[185389]: 2026-01-26 16:38:55.120 185393 DEBUG nova.virt.libvirt.driver [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:38:55 compute-0 nova_compute[185389]: 2026-01-26 16:38:55.121 185393 DEBUG nova.virt.libvirt.driver [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:38:55 compute-0 nova_compute[185389]: 2026-01-26 16:38:55.121 185393 DEBUG nova.virt.libvirt.driver [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:38:55 compute-0 nova_compute[185389]: 2026-01-26 16:38:55.145 185393 INFO nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:38:55 compute-0 nova_compute[185389]: 2026-01-26 16:38:55.184 185393 INFO nova.compute.manager [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] Took 7.76 seconds to spawn the instance on the hypervisor.
Jan 26 16:38:55 compute-0 nova_compute[185389]: 2026-01-26 16:38:55.185 185393 DEBUG nova.compute.manager [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:38:55 compute-0 nova_compute[185389]: 2026-01-26 16:38:55.251 185393 INFO nova.compute.manager [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] Took 8.29 seconds to build instance.
Jan 26 16:38:55 compute-0 nova_compute[185389]: 2026-01-26 16:38:55.266 185393 DEBUG oslo_concurrency.lockutils [None req-42d82ff3-6b65-4728-ab72-613229ebd66a 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "2ee04f75-dc75-489c-85b5-19cd6d573bf1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.370s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:38:55 compute-0 nova_compute[185389]: 2026-01-26 16:38:55.476 185393 DEBUG nova.network.neutron [req-ac90e358-04b0-40df-b022-529c1bdb8fb2 req-a7d529c0-b320-496a-85b8-c3dd145373bb 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] Updated VIF entry in instance network info cache for port 5e252863-184d-4e1e-a33d-6e280cd72b51. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:38:55 compute-0 nova_compute[185389]: 2026-01-26 16:38:55.477 185393 DEBUG nova.network.neutron [req-ac90e358-04b0-40df-b022-529c1bdb8fb2 req-a7d529c0-b320-496a-85b8-c3dd145373bb 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] Updating instance_info_cache with network_info: [{"id": "5e252863-184d-4e1e-a33d-6e280cd72b51", "address": "fa:16:3e:65:38:01", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.173", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e252863-18", "ovs_interfaceid": "5e252863-184d-4e1e-a33d-6e280cd72b51", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:38:55 compute-0 nova_compute[185389]: 2026-01-26 16:38:55.497 185393 DEBUG oslo_concurrency.lockutils [req-ac90e358-04b0-40df-b022-529c1bdb8fb2 req-a7d529c0-b320-496a-85b8-c3dd145373bb 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Releasing lock "refresh_cache-2ee04f75-dc75-489c-85b5-19cd6d573bf1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:38:56 compute-0 podman[239424]: 2026-01-26 16:38:56.232578183 +0000 UTC m=+0.104256119 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., version=9.4, distribution-scope=public, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, managed_by=edpm_ansible, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., name=ubi9, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=kepler, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, io.openshift.expose-services=)
Jan 26 16:38:56 compute-0 podman[239423]: 2026-01-26 16:38:56.286439974 +0000 UTC m=+0.170748533 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.build-date=20251202)
Jan 26 16:38:57 compute-0 nova_compute[185389]: 2026-01-26 16:38:57.006 185393 DEBUG nova.compute.manager [req-04742f18-7055-4e1c-8b99-1ec8124c7520 req-00106292-76ce-475c-b03a-de38fd683c4b 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] Received event network-vif-plugged-5e252863-184d-4e1e-a33d-6e280cd72b51 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:38:57 compute-0 nova_compute[185389]: 2026-01-26 16:38:57.006 185393 DEBUG oslo_concurrency.lockutils [req-04742f18-7055-4e1c-8b99-1ec8124c7520 req-00106292-76ce-475c-b03a-de38fd683c4b 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquiring lock "2ee04f75-dc75-489c-85b5-19cd6d573bf1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:38:57 compute-0 nova_compute[185389]: 2026-01-26 16:38:57.007 185393 DEBUG oslo_concurrency.lockutils [req-04742f18-7055-4e1c-8b99-1ec8124c7520 req-00106292-76ce-475c-b03a-de38fd683c4b 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "2ee04f75-dc75-489c-85b5-19cd6d573bf1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:38:57 compute-0 nova_compute[185389]: 2026-01-26 16:38:57.007 185393 DEBUG oslo_concurrency.lockutils [req-04742f18-7055-4e1c-8b99-1ec8124c7520 req-00106292-76ce-475c-b03a-de38fd683c4b 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "2ee04f75-dc75-489c-85b5-19cd6d573bf1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:38:57 compute-0 nova_compute[185389]: 2026-01-26 16:38:57.007 185393 DEBUG nova.compute.manager [req-04742f18-7055-4e1c-8b99-1ec8124c7520 req-00106292-76ce-475c-b03a-de38fd683c4b 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] No waiting events found dispatching network-vif-plugged-5e252863-184d-4e1e-a33d-6e280cd72b51 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:38:57 compute-0 nova_compute[185389]: 2026-01-26 16:38:57.007 185393 WARNING nova.compute.manager [req-04742f18-7055-4e1c-8b99-1ec8124c7520 req-00106292-76ce-475c-b03a-de38fd683c4b 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] Received unexpected event network-vif-plugged-5e252863-184d-4e1e-a33d-6e280cd72b51 for instance with vm_state active and task_state None.
Jan 26 16:38:57 compute-0 nova_compute[185389]: 2026-01-26 16:38:57.908 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:38:59 compute-0 nova_compute[185389]: 2026-01-26 16:38:59.339 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:38:59 compute-0 podman[201244]: time="2026-01-26T16:38:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 16:38:59 compute-0 podman[201244]: @ - - [26/Jan/2026:16:38:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 16:38:59 compute-0 podman[201244]: @ - - [26/Jan/2026:16:38:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4348 "" "Go-http-client/1.1"
Jan 26 16:39:01 compute-0 openstack_network_exporter[204387]: ERROR   16:39:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 16:39:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:39:01 compute-0 openstack_network_exporter[204387]: ERROR   16:39:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 16:39:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:39:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:39:01.716 106955 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:39:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:39:01.718 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:39:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:39:01.719 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:39:02 compute-0 nova_compute[185389]: 2026-01-26 16:39:02.913 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:39:04 compute-0 podman[239469]: 2026-01-26 16:39:04.202388963 +0000 UTC m=+0.075750126 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 26 16:39:04 compute-0 nova_compute[185389]: 2026-01-26 16:39:04.340 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:39:07 compute-0 nova_compute[185389]: 2026-01-26 16:39:07.920 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:39:09 compute-0 nova_compute[185389]: 2026-01-26 16:39:09.343 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:39:12 compute-0 podman[239494]: 2026-01-26 16:39:12.242103491 +0000 UTC m=+0.117702635 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=openstack_network_exporter, release=1755695350, vcs-type=git, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, name=ubi9-minimal, io.openshift.expose-services=, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.buildah.version=1.33.7)
Jan 26 16:39:12 compute-0 nova_compute[185389]: 2026-01-26 16:39:12.925 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:39:14 compute-0 nova_compute[185389]: 2026-01-26 16:39:14.346 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:39:14 compute-0 podman[239514]: 2026-01-26 16:39:14.768293889 +0000 UTC m=+0.081111051 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260120, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image)
Jan 26 16:39:17 compute-0 nova_compute[185389]: 2026-01-26 16:39:17.929 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:39:19 compute-0 nova_compute[185389]: 2026-01-26 16:39:19.349 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:39:20 compute-0 podman[239535]: 2026-01-26 16:39:20.474373441 +0000 UTC m=+0.072747594 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 26 16:39:22 compute-0 nova_compute[185389]: 2026-01-26 16:39:22.934 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:39:23 compute-0 podman[239558]: 2026-01-26 16:39:23.216874156 +0000 UTC m=+0.094973827 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 26 16:39:24 compute-0 ovn_controller[97699]: 2026-01-26T16:39:24Z|00039|memory_trim|INFO|Detected inactivity (last active 30015 ms ago): trimming memory
Jan 26 16:39:24 compute-0 nova_compute[185389]: 2026-01-26 16:39:24.353 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:39:25 compute-0 podman[239577]: 2026-01-26 16:39:25.264290845 +0000 UTC m=+0.128433136 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 26 16:39:27 compute-0 podman[239596]: 2026-01-26 16:39:27.235767673 +0000 UTC m=+0.123882571 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, 
org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:39:27 compute-0 podman[239597]: 2026-01-26 16:39:27.252525838 +0000 UTC m=+0.121056795 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, build-date=2024-09-18T21:23:30, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., version=9.4, io.buildah.version=1.29.0, config_id=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, container_name=kepler, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 26 16:39:27 compute-0 nova_compute[185389]: 2026-01-26 16:39:27.939 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:39:28 compute-0 ovn_controller[97699]: 2026-01-26T16:39:28Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:65:38:01 192.168.0.173
Jan 26 16:39:28 compute-0 ovn_controller[97699]: 2026-01-26T16:39:28Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:65:38:01 192.168.0.173
Jan 26 16:39:29 compute-0 nova_compute[185389]: 2026-01-26 16:39:29.357 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:39:29 compute-0 podman[201244]: time="2026-01-26T16:39:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 16:39:29 compute-0 podman[201244]: @ - - [26/Jan/2026:16:39:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 16:39:29 compute-0 podman[201244]: @ - - [26/Jan/2026:16:39:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4360 "" "Go-http-client/1.1"
Jan 26 16:39:31 compute-0 openstack_network_exporter[204387]: ERROR   16:39:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 16:39:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:39:31 compute-0 openstack_network_exporter[204387]: ERROR   16:39:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 16:39:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:39:32 compute-0 nova_compute[185389]: 2026-01-26 16:39:32.944 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:39:33 compute-0 nova_compute[185389]: 2026-01-26 16:39:33.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:39:34 compute-0 nova_compute[185389]: 2026-01-26 16:39:34.361 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:39:34 compute-0 nova_compute[185389]: 2026-01-26 16:39:34.718 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:39:34 compute-0 nova_compute[185389]: 2026-01-26 16:39:34.719 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:39:34 compute-0 nova_compute[185389]: 2026-01-26 16:39:34.719 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 16:39:35 compute-0 podman[239646]: 2026-01-26 16:39:35.210239928 +0000 UTC m=+0.083944729 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 16:39:35 compute-0 nova_compute[185389]: 2026-01-26 16:39:35.542 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "refresh_cache-60ba224f-9c5d-4eb4-b501-66d7339832b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:39:35 compute-0 nova_compute[185389]: 2026-01-26 16:39:35.543 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquired lock "refresh_cache-60ba224f-9c5d-4eb4-b501-66d7339832b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:39:35 compute-0 nova_compute[185389]: 2026-01-26 16:39:35.543 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 26 16:39:35 compute-0 nova_compute[185389]: 2026-01-26 16:39:35.544 185393 DEBUG nova.objects.instance [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 60ba224f-9c5d-4eb4-b501-66d7339832b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:39:36 compute-0 nova_compute[185389]: 2026-01-26 16:39:36.911 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Updating instance_info_cache with network_info: [{"id": "0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f", "address": "fa:16:3e:b0:51:31", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.57", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f88f3ae-fb", "ovs_interfaceid": "0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:39:37 compute-0 nova_compute[185389]: 2026-01-26 16:39:37.006 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Releasing lock "refresh_cache-60ba224f-9c5d-4eb4-b501-66d7339832b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:39:37 compute-0 nova_compute[185389]: 2026-01-26 16:39:37.007 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 26 16:39:37 compute-0 nova_compute[185389]: 2026-01-26 16:39:37.008 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:39:37 compute-0 nova_compute[185389]: 2026-01-26 16:39:37.009 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:39:37 compute-0 nova_compute[185389]: 2026-01-26 16:39:37.009 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:39:37 compute-0 nova_compute[185389]: 2026-01-26 16:39:37.010 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:39:37 compute-0 nova_compute[185389]: 2026-01-26 16:39:37.010 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:39:37 compute-0 nova_compute[185389]: 2026-01-26 16:39:37.011 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:39:37 compute-0 nova_compute[185389]: 2026-01-26 16:39:37.103 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:39:37 compute-0 nova_compute[185389]: 2026-01-26 16:39:37.103 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:39:37 compute-0 nova_compute[185389]: 2026-01-26 16:39:37.104 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:39:37 compute-0 nova_compute[185389]: 2026-01-26 16:39:37.104 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:39:37 compute-0 nova_compute[185389]: 2026-01-26 16:39:37.286 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:39:37 compute-0 nova_compute[185389]: 2026-01-26 16:39:37.371 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:39:37 compute-0 nova_compute[185389]: 2026-01-26 16:39:37.372 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:39:37 compute-0 nova_compute[185389]: 2026-01-26 16:39:37.442 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:39:37 compute-0 nova_compute[185389]: 2026-01-26 16:39:37.445 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:39:37 compute-0 nova_compute[185389]: 2026-01-26 16:39:37.547 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:39:37 compute-0 nova_compute[185389]: 2026-01-26 16:39:37.548 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:39:37 compute-0 nova_compute[185389]: 2026-01-26 16:39:37.648 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:39:37 compute-0 nova_compute[185389]: 2026-01-26 16:39:37.655 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:39:37 compute-0 nova_compute[185389]: 2026-01-26 16:39:37.737 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:39:37 compute-0 nova_compute[185389]: 2026-01-26 16:39:37.738 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:39:37 compute-0 nova_compute[185389]: 2026-01-26 16:39:37.833 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:39:37 compute-0 nova_compute[185389]: 2026-01-26 16:39:37.836 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:39:37 compute-0 nova_compute[185389]: 2026-01-26 16:39:37.937 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.eph0 --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:39:37 compute-0 nova_compute[185389]: 2026-01-26 16:39:37.939 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:39:37 compute-0 nova_compute[185389]: 2026-01-26 16:39:37.954 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:39:38 compute-0 nova_compute[185389]: 2026-01-26 16:39:38.008 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.eph0 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:39:38 compute-0 nova_compute[185389]: 2026-01-26 16:39:38.418 185393 WARNING nova.virt.libvirt.driver [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:39:38 compute-0 nova_compute[185389]: 2026-01-26 16:39:38.420 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5059MB free_disk=72.40424346923828GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:39:38 compute-0 nova_compute[185389]: 2026-01-26 16:39:38.420 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:39:38 compute-0 nova_compute[185389]: 2026-01-26 16:39:38.420 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:39:38 compute-0 nova_compute[185389]: 2026-01-26 16:39:38.588 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance 60ba224f-9c5d-4eb4-b501-66d7339832b9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:39:38 compute-0 nova_compute[185389]: 2026-01-26 16:39:38.589 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance 2ee04f75-dc75-489c-85b5-19cd6d573bf1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:39:38 compute-0 nova_compute[185389]: 2026-01-26 16:39:38.589 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:39:38 compute-0 nova_compute[185389]: 2026-01-26 16:39:38.589 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:39:38 compute-0 nova_compute[185389]: 2026-01-26 16:39:38.673 185393 DEBUG nova.compute.provider_tree [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:39:38 compute-0 nova_compute[185389]: 2026-01-26 16:39:38.689 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:39:38 compute-0 nova_compute[185389]: 2026-01-26 16:39:38.876 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:39:38 compute-0 nova_compute[185389]: 2026-01-26 16:39:38.876 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.456s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:39:39 compute-0 nova_compute[185389]: 2026-01-26 16:39:39.365 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:39:40 compute-0 nova_compute[185389]: 2026-01-26 16:39:40.875 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:39:40 compute-0 nova_compute[185389]: 2026-01-26 16:39:40.875 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:39:41 compute-0 nova_compute[185389]: 2026-01-26 16:39:41.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:39:42 compute-0 nova_compute[185389]: 2026-01-26 16:39:42.959 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:39:43 compute-0 podman[239697]: 2026-01-26 16:39:43.235722315 +0000 UTC m=+0.107489448 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, container_name=openstack_network_exporter, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, io.buildah.version=1.33.7, io.k8s.description=The Universal Base 
Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.openshift.expose-services=, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 26 16:39:44 compute-0 nova_compute[185389]: 2026-01-26 16:39:44.368 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:39:45 compute-0 podman[239718]: 2026-01-26 16:39:45.255272946 +0000 UTC m=+0.128088645 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260120, managed_by=edpm_ansible, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute)
Jan 26 16:39:47 compute-0 nova_compute[185389]: 2026-01-26 16:39:47.963 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:39:49 compute-0 nova_compute[185389]: 2026-01-26 16:39:49.371 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:39:51 compute-0 podman[239737]: 2026-01-26 16:39:51.211052754 +0000 UTC m=+0.093604900 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 26 16:39:52 compute-0 nova_compute[185389]: 2026-01-26 16:39:52.968 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:39:54 compute-0 podman[239760]: 2026-01-26 16:39:54.171105234 +0000 UTC m=+0.058530369 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:39:54 compute-0 nova_compute[185389]: 2026-01-26 16:39:54.373 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:39:56 compute-0 podman[239778]: 2026-01-26 16:39:56.185828606 +0000 UTC m=+0.068584282 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 26 16:39:57 compute-0 nova_compute[185389]: 2026-01-26 16:39:57.973 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:39:58 compute-0 podman[239798]: 2026-01-26 16:39:58.190573087 +0000 UTC m=+0.076385272 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, release-0.7.12=, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, io.openshift.expose-services=, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, vcs-type=git, version=9.4, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=kepler, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 26 16:39:58 compute-0 podman[239797]: 2026-01-26 16:39:58.208103142 +0000 UTC m=+0.098593475 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, 
container_name=ovn_controller, org.label-schema.build-date=20251202)
Jan 26 16:39:59 compute-0 sshd-session[239841]: Invalid user user from 45.148.10.121 port 42714
Jan 26 16:39:59 compute-0 sshd-session[239841]: Connection closed by invalid user user 45.148.10.121 port 42714 [preauth]
Jan 26 16:39:59 compute-0 nova_compute[185389]: 2026-01-26 16:39:59.376 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:39:59 compute-0 podman[201244]: time="2026-01-26T16:39:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 16:39:59 compute-0 podman[201244]: @ - - [26/Jan/2026:16:39:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 16:39:59 compute-0 podman[201244]: @ - - [26/Jan/2026:16:39:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4360 "" "Go-http-client/1.1"
Jan 26 16:40:01 compute-0 openstack_network_exporter[204387]: ERROR   16:40:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 16:40:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:40:01 compute-0 openstack_network_exporter[204387]: ERROR   16:40:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 16:40:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:40:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:40:01.718 106955 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:40:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:40:01.719 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:40:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:40:01.719 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:40:02 compute-0 nova_compute[185389]: 2026-01-26 16:40:02.977 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:40:04 compute-0 nova_compute[185389]: 2026-01-26 16:40:04.378 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:40:06 compute-0 podman[239843]: 2026-01-26 16:40:06.177437554 +0000 UTC m=+0.065805257 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 26 16:40:07 compute-0 nova_compute[185389]: 2026-01-26 16:40:07.981 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:40:09 compute-0 nova_compute[185389]: 2026-01-26 16:40:09.382 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:40:12 compute-0 nova_compute[185389]: 2026-01-26 16:40:12.986 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:40:14 compute-0 podman[239867]: 2026-01-26 16:40:14.230915234 +0000 UTC m=+0.121685048 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, config_id=openstack_network_exporter, managed_by=edpm_ansible, io.buildah.version=1.33.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', 
'/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, vcs-type=git)
Jan 26 16:40:14 compute-0 nova_compute[185389]: 2026-01-26 16:40:14.386 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:40:16 compute-0 podman[239887]: 2026-01-26 16:40:16.236241733 +0000 UTC m=+0.116877607 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, 
io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260120, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true)
Jan 26 16:40:17 compute-0 nova_compute[185389]: 2026-01-26 16:40:17.991 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:40:19 compute-0 nova_compute[185389]: 2026-01-26 16:40:19.390 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:40:22 compute-0 podman[239908]: 2026-01-26 16:40:22.177901516 +0000 UTC m=+0.070060181 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 26 16:40:22 compute-0 nova_compute[185389]: 2026-01-26 16:40:22.996 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:40:24 compute-0 nova_compute[185389]: 2026-01-26 16:40:24.392 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:40:25 compute-0 podman[239931]: 2026-01-26 16:40:25.196463653 +0000 UTC m=+0.079401685 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 26 16:40:27 compute-0 podman[239949]: 2026-01-26 16:40:27.220297848 +0000 UTC m=+0.091365172 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:40:28 compute-0 nova_compute[185389]: 2026-01-26 16:40:28.001 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:40:29 compute-0 podman[239969]: 2026-01-26 16:40:29.234576832 +0000 UTC m=+0.094106787 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, io.openshift.expose-services=, io.buildah.version=1.29.0, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, container_name=kepler, distribution-scope=public, managed_by=edpm_ansible, release-0.7.12=, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9)
Jan 26 16:40:29 compute-0 podman[239968]: 2026-01-26 16:40:29.294428983 +0000 UTC m=+0.164845564 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, 
container_name=ovn_controller, org.label-schema.build-date=20251202)
Jan 26 16:40:29 compute-0 nova_compute[185389]: 2026-01-26 16:40:29.395 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:40:29 compute-0 podman[201244]: time="2026-01-26T16:40:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 16:40:29 compute-0 podman[201244]: @ - - [26/Jan/2026:16:40:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 16:40:29 compute-0 podman[201244]: @ - - [26/Jan/2026:16:40:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4353 "" "Go-http-client/1.1"
Jan 26 16:40:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:31.335 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 26 16:40:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:31.336 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 26 16:40:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:31.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:40:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:31.338 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f04ce8af020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:40:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:31.338 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:40:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:31.339 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:40:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:31.339 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:40:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:31.339 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:40:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:31.340 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:40:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:31.340 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:40:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:31.340 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:40:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:31.341 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:40:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:31.341 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:40:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:31.341 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:40:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:31.341 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:40:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:31.342 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:40:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:31.342 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8adb20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:40:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:31.342 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:40:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:31.342 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:40:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:31.343 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:40:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:31.343 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:40:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:31.343 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:40:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:31.344 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:40:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:31.344 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:40:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:31.345 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:40:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:31.345 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:40:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:31.345 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:40:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:31.345 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:40:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:31.346 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:40:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:31.352 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '60ba224f-9c5d-4eb4-b501-66d7339832b9', 'name': 'test_0', 'flavor': {'id': 'c2a8df4d-a1d7-42a3-8279-8c7de8a1a662', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '718285d9-0264-40f4-9fb3-d2faff180284'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'aa8f1f3bbce34237a208c8e92ca9286f', 'user_id': '3c0ab9326d69400aa6a4a91432885d7f', 'hostId': '5b2ad2004cb5a5985538ff82d4dda707a9aa9c0c35c745039e18e89b', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 26 16:40:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:31.357 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 2ee04f75-dc75-489c-85b5-19cd6d573bf1 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 26 16:40:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:31.360 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/2ee04f75-dc75-489c-85b5-19cd6d573bf1 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}f609241ecdf9402bd0546eda97196742cf90b225f1ce4eb867c55aad4d129116" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 26 16:40:31 compute-0 openstack_network_exporter[204387]: ERROR   16:40:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 16:40:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:40:31 compute-0 openstack_network_exporter[204387]: ERROR   16:40:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 16:40:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.731 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1960 Content-Type: application/json Date: Mon, 26 Jan 2026 16:40:31 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-4f0bb4fc-e551-4cf4-b27f-3ee77d020c5e x-openstack-request-id: req-4f0bb4fc-e551-4cf4-b27f-3ee77d020c5e _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.732 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "2ee04f75-dc75-489c-85b5-19cd6d573bf1", "name": "vn-vo2qfhx-yjgyfumqtzbt-dg43u2kjlm35-vnf-tep2eibpzhxe", "status": "ACTIVE", "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "user_id": "3c0ab9326d69400aa6a4a91432885d7f", "metadata": {"metering.server_group": "06b33269-d1c6-4fb9-a44b-be304982a550"}, "hostId": "5b2ad2004cb5a5985538ff82d4dda707a9aa9c0c35c745039e18e89b", "image": {"id": "718285d9-0264-40f4-9fb3-d2faff180284", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/718285d9-0264-40f4-9fb3-d2faff180284"}]}, "flavor": {"id": "c2a8df4d-a1d7-42a3-8279-8c7de8a1a662", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/c2a8df4d-a1d7-42a3-8279-8c7de8a1a662"}]}, "created": "2026-01-26T16:38:44Z", "updated": "2026-01-26T16:38:55Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.173", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:65:38:01"}, {"version": 4, "addr": "192.168.122.200", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:65:38:01"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/2ee04f75-dc75-489c-85b5-19cd6d573bf1"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/2ee04f75-dc75-489c-85b5-19cd6d573bf1"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2026-01-26T16:38:55.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000002", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, 
"OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.732 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/2ee04f75-dc75-489c-85b5-19cd6d573bf1 used request id req-4f0bb4fc-e551-4cf4-b27f-3ee77d020c5e request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.733 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '2ee04f75-dc75-489c-85b5-19cd6d573bf1', 'name': 'vn-vo2qfhx-yjgyfumqtzbt-dg43u2kjlm35-vnf-tep2eibpzhxe', 'flavor': {'id': 'c2a8df4d-a1d7-42a3-8279-8c7de8a1a662', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '718285d9-0264-40f4-9fb3-d2faff180284'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'aa8f1f3bbce34237a208c8e92ca9286f', 'user_id': '3c0ab9326d69400aa6a4a91432885d7f', 'hostId': '5b2ad2004cb5a5985538ff82d4dda707a9aa9c0c35c745039e18e89b', 'status': 'active', 'metadata': {'metering.server_group': '06b33269-d1c6-4fb9-a44b-be304982a550'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.734 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.734 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.735 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.735 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.736 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-26T16:40:32.735287) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.804 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.805 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.805 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.874 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.write.bytes volume: 41811968 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.875 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.875 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.876 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.876 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f04ce8af080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.876 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.876 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.877 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.877 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.878 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.latency volume: 876270399 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.877 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-26T16:40:32.877385) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.878 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.latency volume: 10769042 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.878 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.879 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.write.latency volume: 1477823991 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.879 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.write.latency volume: 10680310 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.879 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.880 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.880 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f04ce8af0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.880 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.881 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.881 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.881 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.881 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-26T16:40:32.881497) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.882 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.882 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.882 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.883 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.write.requests volume: 236 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.883 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.884 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.884 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.885 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f04ce8af470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.885 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.885 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.885 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.886 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.886 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-26T16:40:32.886166) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.891 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.895 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 2ee04f75-dc75-489c-85b5-19cd6d573bf1 / tap5e252863-18 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.895 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.896 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.896 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f04ce8ac260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.896 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.896 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.897 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.897 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.897 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2026-01-26T16:40:32.897371) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.898 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.898 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-vo2qfhx-yjgyfumqtzbt-dg43u2kjlm35-vnf-tep2eibpzhxe>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-vo2qfhx-yjgyfumqtzbt-dg43u2kjlm35-vnf-tep2eibpzhxe>]
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.898 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f04cfa81c70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.898 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.899 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.899 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.899 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.900 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-26T16:40:32.899735) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.925 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/cpu volume: 35160000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.954 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/cpu volume: 46760000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.955 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.955 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f04ce8ac170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.955 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.956 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.956 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.956 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.957 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.957 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-26T16:40:32.956602) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.957 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/network.incoming.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.958 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.958 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f04ce8af140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.958 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.958 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.959 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.959 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.959 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-26T16:40:32.959280) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.960 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.960 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f04ce8adb50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.960 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.960 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.961 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.961 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.961 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-26T16:40:32.961339) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.962 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.962 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.962 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.963 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f04ce8ad9a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.963 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.963 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.963 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.964 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.964 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-26T16:40:32.963922) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.964 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.bytes volume: 2202 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.964 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/network.outgoing.bytes volume: 4690 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.965 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.965 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f04ce8af1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.965 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.966 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.966 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.966 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.966 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-26T16:40:32.966433) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.967 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.967 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f04ce8ad8e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.968 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.968 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.968 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.968 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.969 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-26T16:40:32.968773) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.969 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.969 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/network.outgoing.packets volume: 40 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.970 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.970 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f04ce8ada60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.970 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.971 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.971 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.971 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.972 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-26T16:40:32.971588) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.972 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.bytes.delta volume: 210 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.972 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.973 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.973 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f04ce8adaf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.973 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.973 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8adb20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.974 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8adb20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.974 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.974 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2026-01-26T16:40:32.974359) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.975 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.975 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-vo2qfhx-yjgyfumqtzbt-dg43u2kjlm35-vnf-tep2eibpzhxe>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-vo2qfhx-yjgyfumqtzbt-dg43u2kjlm35-vnf-tep2eibpzhxe>]
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.975 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f04ce8ad880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.975 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.976 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.976 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.976 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.976 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-26T16:40:32.976491) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.977 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.977 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.977 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.978 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f04ce8af3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.978 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.978 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.978 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.979 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.979 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-26T16:40:32.979076) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.979 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/memory.usage volume: 48.9453125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.980 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/memory.usage volume: 49.16015625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.980 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.980 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f04ce8af6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.981 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.981 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.981 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.982 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-26T16:40:32.981627) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.981 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.982 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.982 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.983 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.983 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f04ce8af410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.983 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.984 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.984 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.984 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.984 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-26T16:40:32.984453) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.985 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.bytes volume: 1968 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.985 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/network.incoming.bytes volume: 4849 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.986 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.986 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f04ce8ac470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.986 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.986 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.987 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.987 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.987 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-26T16:40:32.987336) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.988 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.988 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.988 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f04d17142f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.989 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.989 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.989 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.990 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:40:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:32.990 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-26T16:40:32.989940) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:40:33 compute-0 nova_compute[185389]: 2026-01-26 16:40:33.006 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.021 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.021 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.022 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.058 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.059 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.060 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.061 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.062 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f04ce8afe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.062 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.063 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.063 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.064 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.065 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-26T16:40:33.064298) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.065 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.066 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.067 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.068 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.068 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.069 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.071 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.071 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f04ce8aede0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.072 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.072 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.073 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.074 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.075 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-26T16:40:33.073912) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.075 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.076 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.077 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.078 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.078 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.079 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.080 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.081 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f04ce8aef00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.081 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.082 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.082 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.083 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.083 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-26T16:40:33.082875) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.083 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.latency volume: 331117122 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.084 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.latency volume: 97972603 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.084 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.latency volume: 58692222 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.085 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.read.latency volume: 489623248 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.086 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.read.latency volume: 79957548 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.086 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.read.latency volume: 54491661 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.087 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.087 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f04ce8ac710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.088 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.088 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.088 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.089 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.089 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-26T16:40:33.089164) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.090 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.090 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.091 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.091 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f04ce8aef60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.091 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.092 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.092 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.093 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.093 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-26T16:40:33.093237) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.094 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.094 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.095 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.095 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.096 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.096 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.097 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.097 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f04ce8aefc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.098 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.098 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.098 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.099 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.099 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-26T16:40:33.099010) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.099 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.100 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.101 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.101 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.102 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.102 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.103 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.105 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.105 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.105 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.105 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.105 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.105 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.105 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.105 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.105 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.106 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.106 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:40:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:40:33.106 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:40:34 compute-0 nova_compute[185389]: 2026-01-26 16:40:34.403 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:40:34 compute-0 nova_compute[185389]: 2026-01-26 16:40:34.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:40:35 compute-0 nova_compute[185389]: 2026-01-26 16:40:35.721 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:40:35 compute-0 nova_compute[185389]: 2026-01-26 16:40:35.722 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:40:36 compute-0 nova_compute[185389]: 2026-01-26 16:40:36.686 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "refresh_cache-2ee04f75-dc75-489c-85b5-19cd6d573bf1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:40:36 compute-0 nova_compute[185389]: 2026-01-26 16:40:36.687 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquired lock "refresh_cache-2ee04f75-dc75-489c-85b5-19cd6d573bf1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:40:36 compute-0 nova_compute[185389]: 2026-01-26 16:40:36.688 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 26 16:40:37 compute-0 podman[240013]: 2026-01-26 16:40:37.228509735 +0000 UTC m=+0.100035008 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 26 16:40:38 compute-0 nova_compute[185389]: 2026-01-26 16:40:38.013 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:40:38 compute-0 nova_compute[185389]: 2026-01-26 16:40:38.326 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] Updating instance_info_cache with network_info: [{"id": "5e252863-184d-4e1e-a33d-6e280cd72b51", "address": "fa:16:3e:65:38:01", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.173", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e252863-18", "ovs_interfaceid": "5e252863-184d-4e1e-a33d-6e280cd72b51", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:40:38 compute-0 nova_compute[185389]: 2026-01-26 16:40:38.350 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Releasing lock "refresh_cache-2ee04f75-dc75-489c-85b5-19cd6d573bf1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:40:38 compute-0 nova_compute[185389]: 2026-01-26 16:40:38.351 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 26 16:40:38 compute-0 nova_compute[185389]: 2026-01-26 16:40:38.352 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:40:38 compute-0 nova_compute[185389]: 2026-01-26 16:40:38.353 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:40:38 compute-0 nova_compute[185389]: 2026-01-26 16:40:38.353 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:40:38 compute-0 nova_compute[185389]: 2026-01-26 16:40:38.356 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:40:38 compute-0 nova_compute[185389]: 2026-01-26 16:40:38.357 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:40:38 compute-0 nova_compute[185389]: 2026-01-26 16:40:38.357 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:40:38 compute-0 nova_compute[185389]: 2026-01-26 16:40:38.386 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:40:38 compute-0 nova_compute[185389]: 2026-01-26 16:40:38.386 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:40:38 compute-0 nova_compute[185389]: 2026-01-26 16:40:38.386 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:40:38 compute-0 nova_compute[185389]: 2026-01-26 16:40:38.387 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:40:38 compute-0 nova_compute[185389]: 2026-01-26 16:40:38.694 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:40:38 compute-0 nova_compute[185389]: 2026-01-26 16:40:38.770 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:40:38 compute-0 nova_compute[185389]: 2026-01-26 16:40:38.772 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:40:38 compute-0 nova_compute[185389]: 2026-01-26 16:40:38.853 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:40:38 compute-0 nova_compute[185389]: 2026-01-26 16:40:38.855 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:40:38 compute-0 nova_compute[185389]: 2026-01-26 16:40:38.919 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:40:38 compute-0 nova_compute[185389]: 2026-01-26 16:40:38.921 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:40:38 compute-0 nova_compute[185389]: 2026-01-26 16:40:38.987 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:40:39 compute-0 nova_compute[185389]: 2026-01-26 16:40:39.004 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:40:39 compute-0 nova_compute[185389]: 2026-01-26 16:40:39.096 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:40:39 compute-0 nova_compute[185389]: 2026-01-26 16:40:39.098 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:40:39 compute-0 nova_compute[185389]: 2026-01-26 16:40:39.188 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:40:39 compute-0 nova_compute[185389]: 2026-01-26 16:40:39.190 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:40:39 compute-0 nova_compute[185389]: 2026-01-26 16:40:39.277 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.eph0 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:40:39 compute-0 nova_compute[185389]: 2026-01-26 16:40:39.282 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:40:39 compute-0 nova_compute[185389]: 2026-01-26 16:40:39.356 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.eph0 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:40:39 compute-0 nova_compute[185389]: 2026-01-26 16:40:39.405 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:40:39 compute-0 nova_compute[185389]: 2026-01-26 16:40:39.781 185393 WARNING nova.virt.libvirt.driver [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:40:39 compute-0 nova_compute[185389]: 2026-01-26 16:40:39.783 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5057MB free_disk=72.40328598022461GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:40:39 compute-0 nova_compute[185389]: 2026-01-26 16:40:39.784 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:40:39 compute-0 nova_compute[185389]: 2026-01-26 16:40:39.785 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:40:39 compute-0 nova_compute[185389]: 2026-01-26 16:40:39.935 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance 60ba224f-9c5d-4eb4-b501-66d7339832b9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:40:39 compute-0 nova_compute[185389]: 2026-01-26 16:40:39.936 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance 2ee04f75-dc75-489c-85b5-19cd6d573bf1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:40:39 compute-0 nova_compute[185389]: 2026-01-26 16:40:39.937 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:40:39 compute-0 nova_compute[185389]: 2026-01-26 16:40:39.937 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:40:40 compute-0 nova_compute[185389]: 2026-01-26 16:40:40.035 185393 DEBUG nova.compute.provider_tree [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:40:40 compute-0 nova_compute[185389]: 2026-01-26 16:40:40.089 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:40:40 compute-0 nova_compute[185389]: 2026-01-26 16:40:40.093 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:40:40 compute-0 nova_compute[185389]: 2026-01-26 16:40:40.094 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.309s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:40:43 compute-0 nova_compute[185389]: 2026-01-26 16:40:43.019 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:40:43 compute-0 nova_compute[185389]: 2026-01-26 16:40:43.089 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:40:43 compute-0 nova_compute[185389]: 2026-01-26 16:40:43.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:40:44 compute-0 nova_compute[185389]: 2026-01-26 16:40:44.407 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:40:44 compute-0 podman[240062]: 2026-01-26 16:40:44.788421948 +0000 UTC m=+0.093623224 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, distribution-scope=public, io.buildah.version=1.33.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, io.openshift.expose-services=, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., maintainer=Red Hat, Inc.)
Jan 26 16:40:47 compute-0 podman[240085]: 2026-01-26 16:40:47.23888782 +0000 UTC m=+0.132885813 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20260120, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, tcib_managed=true, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 26 16:40:48 compute-0 nova_compute[185389]: 2026-01-26 16:40:48.025 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:40:49 compute-0 nova_compute[185389]: 2026-01-26 16:40:49.409 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:40:53 compute-0 nova_compute[185389]: 2026-01-26 16:40:53.027 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:40:53 compute-0 podman[240103]: 2026-01-26 16:40:53.197905958 +0000 UTC m=+0.086268863 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 26 16:40:54 compute-0 nova_compute[185389]: 2026-01-26 16:40:54.414 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:40:56 compute-0 podman[240128]: 2026-01-26 16:40:56.206482473 +0000 UTC m=+0.100855240 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:40:56 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 26 16:40:58 compute-0 nova_compute[185389]: 2026-01-26 16:40:58.033 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:40:58 compute-0 podman[240146]: 2026-01-26 16:40:58.17831081 +0000 UTC m=+0.071807539 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:40:59 compute-0 nova_compute[185389]: 2026-01-26 16:40:59.418 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:40:59 compute-0 podman[201244]: time="2026-01-26T16:40:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 16:40:59 compute-0 podman[201244]: @ - - [26/Jan/2026:16:40:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 16:40:59 compute-0 podman[201244]: @ - - [26/Jan/2026:16:40:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4371 "" "Go-http-client/1.1"
Jan 26 16:41:00 compute-0 podman[240167]: 2026-01-26 16:41:00.198172596 +0000 UTC m=+0.081786951 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, vendor=Red Hat, Inc., architecture=x86_64, version=9.4, com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, name=ubi9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, io.buildah.version=1.29.0, release=1214.1726694543, managed_by=edpm_ansible, maintainer=Red Hat, Inc., io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal 
Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Jan 26 16:41:00 compute-0 podman[240166]: 2026-01-26 16:41:00.215011785 +0000 UTC m=+0.107078580 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, 
org.label-schema.license=GPLv2, config_id=ovn_controller)
Jan 26 16:41:01 compute-0 openstack_network_exporter[204387]: ERROR   16:41:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 16:41:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:41:01 compute-0 openstack_network_exporter[204387]: ERROR   16:41:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 16:41:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:41:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:41:01.719 106955 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:41:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:41:01.719 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:41:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:41:01.720 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:41:03 compute-0 nova_compute[185389]: 2026-01-26 16:41:03.037 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:41:04 compute-0 nova_compute[185389]: 2026-01-26 16:41:04.419 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:41:08 compute-0 nova_compute[185389]: 2026-01-26 16:41:08.042 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:41:08 compute-0 podman[240215]: 2026-01-26 16:41:08.206349077 +0000 UTC m=+0.081520483 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 26 16:41:09 compute-0 nova_compute[185389]: 2026-01-26 16:41:09.424 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:41:13 compute-0 nova_compute[185389]: 2026-01-26 16:41:13.047 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:41:14 compute-0 nova_compute[185389]: 2026-01-26 16:41:14.428 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:41:15 compute-0 podman[240238]: 2026-01-26 16:41:15.217935924 +0000 UTC m=+0.097636013 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, container_name=openstack_network_exporter, vendor=Red Hat, Inc., architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., version=9.6, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.openshift.expose-services=, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=openstack_network_exporter, vcs-type=git, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container)
Jan 26 16:41:18 compute-0 nova_compute[185389]: 2026-01-26 16:41:18.053 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:41:18 compute-0 podman[240259]: 2026-01-26 16:41:18.212446546 +0000 UTC m=+0.103095681 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260120, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Jan 26 16:41:19 compute-0 nova_compute[185389]: 2026-01-26 16:41:19.430 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:41:23 compute-0 nova_compute[185389]: 2026-01-26 16:41:23.058 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:41:24 compute-0 podman[240280]: 2026-01-26 16:41:24.168453141 +0000 UTC m=+0.062158656 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 26 16:41:24 compute-0 nova_compute[185389]: 2026-01-26 16:41:24.430 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:41:27 compute-0 podman[240305]: 2026-01-26 16:41:27.226892536 +0000 UTC m=+0.111018828 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 26 16:41:28 compute-0 nova_compute[185389]: 2026-01-26 16:41:28.063 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:41:29 compute-0 podman[240323]: 2026-01-26 16:41:29.218335777 +0000 UTC m=+0.088195516 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true)
Jan 26 16:41:29 compute-0 nova_compute[185389]: 2026-01-26 16:41:29.432 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:41:29 compute-0 podman[201244]: time="2026-01-26T16:41:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 16:41:29 compute-0 podman[201244]: @ - - [26/Jan/2026:16:41:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 16:41:29 compute-0 podman[201244]: @ - - [26/Jan/2026:16:41:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4370 "" "Go-http-client/1.1"
Jan 26 16:41:31 compute-0 podman[240343]: 2026-01-26 16:41:31.214550758 +0000 UTC m=+0.089190602 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, vcs-type=git, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, name=ubi9, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., architecture=x86_64, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, release-0.7.12=, io.buildah.version=1.29.0, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, com.redhat.component=ubi9-container)
Jan 26 16:41:31 compute-0 podman[240342]: 2026-01-26 16:41:31.265930019 +0000 UTC m=+0.133033118 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:41:31 compute-0 openstack_network_exporter[204387]: ERROR   16:41:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 16:41:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:41:31 compute-0 openstack_network_exporter[204387]: ERROR   16:41:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 16:41:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:41:33 compute-0 nova_compute[185389]: 2026-01-26 16:41:33.065 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:41:34 compute-0 nova_compute[185389]: 2026-01-26 16:41:34.435 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:41:35 compute-0 nova_compute[185389]: 2026-01-26 16:41:35.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:41:35 compute-0 nova_compute[185389]: 2026-01-26 16:41:35.722 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:41:35 compute-0 nova_compute[185389]: 2026-01-26 16:41:35.723 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 16:41:38 compute-0 nova_compute[185389]: 2026-01-26 16:41:38.070 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:41:38 compute-0 nova_compute[185389]: 2026-01-26 16:41:38.250 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "refresh_cache-60ba224f-9c5d-4eb4-b501-66d7339832b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:41:38 compute-0 nova_compute[185389]: 2026-01-26 16:41:38.251 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquired lock "refresh_cache-60ba224f-9c5d-4eb4-b501-66d7339832b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:41:38 compute-0 nova_compute[185389]: 2026-01-26 16:41:38.252 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 26 16:41:38 compute-0 nova_compute[185389]: 2026-01-26 16:41:38.252 185393 DEBUG nova.objects.instance [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 60ba224f-9c5d-4eb4-b501-66d7339832b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:41:39 compute-0 podman[240387]: 2026-01-26 16:41:39.235514288 +0000 UTC m=+0.123361363 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 26 16:41:39 compute-0 nova_compute[185389]: 2026-01-26 16:41:39.438 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:41:40 compute-0 nova_compute[185389]: 2026-01-26 16:41:40.641 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Updating instance_info_cache with network_info: [{"id": "0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f", "address": "fa:16:3e:b0:51:31", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.57", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f88f3ae-fb", "ovs_interfaceid": "0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:41:40 compute-0 nova_compute[185389]: 2026-01-26 16:41:40.708 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Releasing lock "refresh_cache-60ba224f-9c5d-4eb4-b501-66d7339832b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:41:40 compute-0 nova_compute[185389]: 2026-01-26 16:41:40.709 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 26 16:41:40 compute-0 nova_compute[185389]: 2026-01-26 16:41:40.709 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:41:40 compute-0 nova_compute[185389]: 2026-01-26 16:41:40.710 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:41:40 compute-0 nova_compute[185389]: 2026-01-26 16:41:40.711 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:41:40 compute-0 nova_compute[185389]: 2026-01-26 16:41:40.711 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:41:40 compute-0 nova_compute[185389]: 2026-01-26 16:41:40.712 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:41:40 compute-0 nova_compute[185389]: 2026-01-26 16:41:40.713 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:41:40 compute-0 nova_compute[185389]: 2026-01-26 16:41:40.713 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:41:40 compute-0 nova_compute[185389]: 2026-01-26 16:41:40.745 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:41:40 compute-0 nova_compute[185389]: 2026-01-26 16:41:40.746 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:41:40 compute-0 nova_compute[185389]: 2026-01-26 16:41:40.747 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:41:40 compute-0 nova_compute[185389]: 2026-01-26 16:41:40.748 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:41:40 compute-0 nova_compute[185389]: 2026-01-26 16:41:40.846 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:41:40 compute-0 nova_compute[185389]: 2026-01-26 16:41:40.931 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:41:40 compute-0 nova_compute[185389]: 2026-01-26 16:41:40.941 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:41:41 compute-0 nova_compute[185389]: 2026-01-26 16:41:41.043 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:41:41 compute-0 nova_compute[185389]: 2026-01-26 16:41:41.044 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:41:41 compute-0 nova_compute[185389]: 2026-01-26 16:41:41.142 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:41:41 compute-0 nova_compute[185389]: 2026-01-26 16:41:41.144 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:41:41 compute-0 nova_compute[185389]: 2026-01-26 16:41:41.232 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:41:41 compute-0 nova_compute[185389]: 2026-01-26 16:41:41.254 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:41:41 compute-0 nova_compute[185389]: 2026-01-26 16:41:41.331 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:41:41 compute-0 nova_compute[185389]: 2026-01-26 16:41:41.332 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:41:41 compute-0 nova_compute[185389]: 2026-01-26 16:41:41.429 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:41:41 compute-0 nova_compute[185389]: 2026-01-26 16:41:41.431 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:41:41 compute-0 nova_compute[185389]: 2026-01-26 16:41:41.538 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.eph0 --force-share --output=json" returned: 0 in 0.107s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:41:41 compute-0 nova_compute[185389]: 2026-01-26 16:41:41.550 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:41:41 compute-0 nova_compute[185389]: 2026-01-26 16:41:41.617 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.eph0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:41:41 compute-0 nova_compute[185389]: 2026-01-26 16:41:41.984 185393 WARNING nova.virt.libvirt.driver [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:41:41 compute-0 nova_compute[185389]: 2026-01-26 16:41:41.986 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5060MB free_disk=72.40328598022461GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:41:41 compute-0 nova_compute[185389]: 2026-01-26 16:41:41.987 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:41:41 compute-0 nova_compute[185389]: 2026-01-26 16:41:41.987 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:41:42 compute-0 nova_compute[185389]: 2026-01-26 16:41:42.208 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance 60ba224f-9c5d-4eb4-b501-66d7339832b9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:41:42 compute-0 nova_compute[185389]: 2026-01-26 16:41:42.209 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance 2ee04f75-dc75-489c-85b5-19cd6d573bf1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:41:42 compute-0 nova_compute[185389]: 2026-01-26 16:41:42.209 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:41:42 compute-0 nova_compute[185389]: 2026-01-26 16:41:42.210 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:41:42 compute-0 nova_compute[185389]: 2026-01-26 16:41:42.316 185393 DEBUG nova.compute.provider_tree [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:41:42 compute-0 nova_compute[185389]: 2026-01-26 16:41:42.362 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:41:42 compute-0 nova_compute[185389]: 2026-01-26 16:41:42.365 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:41:42 compute-0 nova_compute[185389]: 2026-01-26 16:41:42.366 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.379s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:41:43 compute-0 nova_compute[185389]: 2026-01-26 16:41:43.074 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:41:44 compute-0 nova_compute[185389]: 2026-01-26 16:41:44.441 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:41:45 compute-0 nova_compute[185389]: 2026-01-26 16:41:45.362 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:41:45 compute-0 nova_compute[185389]: 2026-01-26 16:41:45.363 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:41:45 compute-0 nova_compute[185389]: 2026-01-26 16:41:45.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:41:46 compute-0 podman[240432]: 2026-01-26 16:41:46.236063423 +0000 UTC m=+0.121164403 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, vendor=Red Hat, Inc., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, vcs-type=git, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, version=9.6, managed_by=edpm_ansible, architecture=x86_64)
Jan 26 16:41:48 compute-0 nova_compute[185389]: 2026-01-26 16:41:48.080 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:41:49 compute-0 podman[240453]: 2026-01-26 16:41:49.170763476 +0000 UTC m=+0.062643308 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260120, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, tcib_managed=true, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 26 16:41:49 compute-0 nova_compute[185389]: 2026-01-26 16:41:49.445 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:41:53 compute-0 nova_compute[185389]: 2026-01-26 16:41:53.086 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:41:54 compute-0 nova_compute[185389]: 2026-01-26 16:41:54.446 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:41:55 compute-0 podman[240472]: 2026-01-26 16:41:55.208925689 +0000 UTC m=+0.096092031 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 26 16:41:58 compute-0 nova_compute[185389]: 2026-01-26 16:41:58.091 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:41:58 compute-0 podman[240496]: 2026-01-26 16:41:58.241797857 +0000 UTC m=+0.126798177 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:41:59 compute-0 nova_compute[185389]: 2026-01-26 16:41:59.448 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:41:59 compute-0 podman[201244]: time="2026-01-26T16:41:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 16:41:59 compute-0 podman[201244]: @ - - [26/Jan/2026:16:41:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 16:41:59 compute-0 podman[201244]: @ - - [26/Jan/2026:16:41:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4361 "" "Go-http-client/1.1"
Jan 26 16:42:00 compute-0 podman[240515]: 2026-01-26 16:42:00.299052932 +0000 UTC m=+0.165889213 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true)
Jan 26 16:42:01 compute-0 openstack_network_exporter[204387]: ERROR   16:42:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 16:42:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:42:01 compute-0 openstack_network_exporter[204387]: ERROR   16:42:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 16:42:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:42:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:42:01.719 106955 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:42:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:42:01.720 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:42:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:42:01.721 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:42:02 compute-0 podman[240535]: 2026-01-26 16:42:02.261498403 +0000 UTC m=+0.141977671 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack 
Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 26 16:42:02 compute-0 podman[240536]: 2026-01-26 16:42:02.264424063 +0000 UTC m=+0.131675751 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, architecture=x86_64, vendor=Red Hat, Inc., version=9.4, distribution-scope=public, name=ubi9, release-0.7.12=, com.redhat.component=ubi9-container, container_name=kepler, io.buildah.version=1.29.0, config_id=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, io.openshift.expose-services=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', 
'/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9)
Jan 26 16:42:03 compute-0 nova_compute[185389]: 2026-01-26 16:42:03.096 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:42:04 compute-0 nova_compute[185389]: 2026-01-26 16:42:04.450 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:42:08 compute-0 nova_compute[185389]: 2026-01-26 16:42:08.102 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:42:09 compute-0 nova_compute[185389]: 2026-01-26 16:42:09.453 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:42:10 compute-0 podman[240581]: 2026-01-26 16:42:10.272350617 +0000 UTC m=+0.135333500 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 16:42:13 compute-0 nova_compute[185389]: 2026-01-26 16:42:13.109 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:42:14 compute-0 nova_compute[185389]: 2026-01-26 16:42:14.459 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:42:17 compute-0 podman[240605]: 2026-01-26 16:42:17.241853057 +0000 UTC m=+0.121077312 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, version=9.6, vcs-type=git, io.openshift.expose-services=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, release=1755695350, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, managed_by=edpm_ansible)
Jan 26 16:42:18 compute-0 nova_compute[185389]: 2026-01-26 16:42:18.114 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:42:19 compute-0 nova_compute[185389]: 2026-01-26 16:42:19.458 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:42:20 compute-0 podman[240626]: 2026-01-26 16:42:20.256669513 +0000 UTC m=+0.128120094 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260120, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.4, tcib_managed=true)
Jan 26 16:42:23 compute-0 nova_compute[185389]: 2026-01-26 16:42:23.118 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:42:24 compute-0 nova_compute[185389]: 2026-01-26 16:42:24.462 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:42:26 compute-0 podman[240646]: 2026-01-26 16:42:26.253828498 +0000 UTC m=+0.122765316 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 26 16:42:28 compute-0 nova_compute[185389]: 2026-01-26 16:42:28.123 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:42:29 compute-0 podman[240670]: 2026-01-26 16:42:29.244078875 +0000 UTC m=+0.119912770 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 26 16:42:29 compute-0 nova_compute[185389]: 2026-01-26 16:42:29.466 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:42:29 compute-0 podman[201244]: time="2026-01-26T16:42:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 16:42:29 compute-0 podman[201244]: @ - - [26/Jan/2026:16:42:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 16:42:29 compute-0 podman[201244]: @ - - [26/Jan/2026:16:42:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4364 "" "Go-http-client/1.1"
Jan 26 16:42:31 compute-0 podman[240687]: 2026-01-26 16:42:31.236283877 +0000 UTC m=+0.103914044 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.337 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.337 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.337 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce44fe30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.338 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f04ce8af020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.340 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce44fe30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.340 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce44fe30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.341 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce44fe30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.341 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce44fe30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.341 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce44fe30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.342 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce44fe30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.342 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce44fe30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.342 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce44fe30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.343 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce44fe30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.343 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce44fe30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.344 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce44fe30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.345 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce44fe30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8adb20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce44fe30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce44fe30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.348 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce44fe30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.350 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce44fe30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.349 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '60ba224f-9c5d-4eb4-b501-66d7339832b9', 'name': 'test_0', 'flavor': {'id': 'c2a8df4d-a1d7-42a3-8279-8c7de8a1a662', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '718285d9-0264-40f4-9fb3-d2faff180284'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'aa8f1f3bbce34237a208c8e92ca9286f', 'user_id': '3c0ab9326d69400aa6a4a91432885d7f', 'hostId': '5b2ad2004cb5a5985538ff82d4dda707a9aa9c0c35c745039e18e89b', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.350 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce44fe30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.352 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce44fe30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.352 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce44fe30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.354 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce44fe30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.354 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce44fe30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.354 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce44fe30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.355 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce44fe30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.356 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce44fe30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.357 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce44fe30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.358 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '2ee04f75-dc75-489c-85b5-19cd6d573bf1', 'name': 'vn-vo2qfhx-yjgyfumqtzbt-dg43u2kjlm35-vnf-tep2eibpzhxe', 'flavor': {'id': 'c2a8df4d-a1d7-42a3-8279-8c7de8a1a662', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '718285d9-0264-40f4-9fb3-d2faff180284'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'aa8f1f3bbce34237a208c8e92ca9286f', 'user_id': '3c0ab9326d69400aa6a4a91432885d7f', 'hostId': '5b2ad2004cb5a5985538ff82d4dda707a9aa9c0c35c745039e18e89b', 'status': 'active', 'metadata': {'metering.server_group': '06b33269-d1c6-4fb9-a44b-be304982a550'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.359 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.359 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.360 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.360 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.361 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-26T16:42:31.360353) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:42:31 compute-0 openstack_network_exporter[204387]: ERROR   16:42:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 16:42:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:42:31 compute-0 openstack_network_exporter[204387]: ERROR   16:42:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 16:42:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.473 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.474 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.474 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.564 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.write.bytes volume: 41836544 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.565 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.565 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.566 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.566 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f04ce8af080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.566 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.566 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.566 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.567 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.567 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.latency volume: 876270399 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.567 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.latency volume: 10769042 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.567 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.567 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.write.latency volume: 1485318056 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.568 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.write.latency volume: 10680310 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.568 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.568 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.568 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f04ce8af0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.569 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.569 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.569 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.569 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.569 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.569 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.570 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.570 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.write.requests volume: 240 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.570 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.570 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.571 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.571 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f04ce8af470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.571 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.571 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.571 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.571 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.572 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-26T16:42:31.566900) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.573 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-26T16:42:31.569359) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.573 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-26T16:42:31.571673) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.582 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.587 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.587 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.587 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f04ce8ac260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.587 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.587 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f04cfa81c70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.588 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.588 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.588 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.588 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.588 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-26T16:42:31.588383) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.613 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/cpu volume: 36760000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.649 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/cpu volume: 165170000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.650 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.650 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f04ce8ac170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.651 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.651 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.651 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.651 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.651 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.651 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/network.incoming.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.652 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.652 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f04ce8af140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.652 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.652 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.652 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.652 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.653 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.654 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f04ce8adb50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.654 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.654 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.654 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.655 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.655 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.655 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-26T16:42:31.651372) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.656 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-26T16:42:31.652761) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.656 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.656 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-26T16:42:31.655078) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.656 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.657 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f04ce8ad9a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.657 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.657 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.657 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.657 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.658 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.bytes volume: 2272 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.658 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/network.outgoing.bytes volume: 4760 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.658 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-26T16:42:31.657771) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.659 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.659 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f04ce8af1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.659 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.659 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.659 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.659 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.660 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.660 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f04ce8ad8e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.660 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.660 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.660 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.660 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.660 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.661 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/network.outgoing.packets volume: 41 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.661 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.661 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f04ce8ada60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.662 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.662 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.662 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.662 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.662 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.662 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-26T16:42:31.659584) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.663 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-26T16:42:31.660786) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.663 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-26T16:42:31.662740) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.663 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.663 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.664 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f04ce8adaf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.664 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.664 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f04ce8ad880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.664 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.664 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.664 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.664 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.664 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.665 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.665 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.665 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f04ce8af3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.666 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.666 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.666 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.666 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.666 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/memory.usage volume: 48.9453125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.666 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/memory.usage volume: 49.15234375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.667 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.667 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f04ce8af6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.667 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.667 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.667 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.667 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.667 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.668 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.668 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.668 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f04ce8af410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.669 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.669 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.669 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.669 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.669 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.bytes volume: 1968 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.669 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/network.incoming.bytes volume: 4849 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.670 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-26T16:42:31.664777) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.670 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.670 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f04ce8ac470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.670 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-26T16:42:31.666344) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.670 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.670 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.670 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-26T16:42:31.667779) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.670 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.670 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-26T16:42:31.669308) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.670 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.671 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.671 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.671 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-26T16:42:31.670909) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.672 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.672 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f04d17142f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.672 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.672 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.672 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.672 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.673 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-26T16:42:31.672669) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.711 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.712 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.712 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.756 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.756 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.757 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.758 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.758 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f04ce8afe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.758 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.758 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.759 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.759 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.759 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.760 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-26T16:42:31.759268) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.760 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.760 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.761 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.761 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.762 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.763 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.763 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f04ce8aede0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.763 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.763 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.763 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.764 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.764 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.764 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.765 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.765 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.766 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.766 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.767 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.768 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f04ce8aef00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.768 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.768 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.768 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-26T16:42:31.764096) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.769 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.769 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.769 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.latency volume: 331117122 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.770 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.latency volume: 97972603 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.770 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.latency volume: 58692222 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.771 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-26T16:42:31.769283) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.771 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.read.latency volume: 489623248 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.771 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.read.latency volume: 79957548 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.772 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.read.latency volume: 54491661 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.773 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.773 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f04ce8ac710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.773 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.773 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.773 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.774 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.774 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-26T16:42:31.774060) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.774 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.775 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.775 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.775 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f04ce8aef60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.776 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.776 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.776 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.776 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.777 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.777 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-26T16:42:31.776718) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.778 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.778 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.778 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.779 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.779 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.780 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.780 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f04ce8aefc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.781 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.781 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.781 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.781 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.781 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.782 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.782 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.783 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-26T16:42:31.781609) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.783 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.784 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.784 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.785 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.786 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.786 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.786 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.786 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.786 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.787 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.787 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.787 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.787 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.787 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.787 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.788 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.788 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.788 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.788 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.788 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.788 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.792 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.792 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.792 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.792 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.792 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.792 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.793 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.793 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:42:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:42:31.793 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:42:33 compute-0 nova_compute[185389]: 2026-01-26 16:42:33.128 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:42:33 compute-0 podman[240708]: 2026-01-26 16:42:33.254126607 +0000 UTC m=+0.115819937 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, vcs-type=git, io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, release=1214.1726694543, build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, release-0.7.12=, config_id=kepler, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Jan 26 16:42:33 compute-0 podman[240707]: 2026-01-26 16:42:33.335001342 +0000 UTC m=+0.204479545 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 26 16:42:34 compute-0 nova_compute[185389]: 2026-01-26 16:42:34.467 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:42:34 compute-0 nova_compute[185389]: 2026-01-26 16:42:34.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:42:34 compute-0 nova_compute[185389]: 2026-01-26 16:42:34.720 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 26 16:42:36 compute-0 nova_compute[185389]: 2026-01-26 16:42:36.757 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:42:36 compute-0 nova_compute[185389]: 2026-01-26 16:42:36.759 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:42:37 compute-0 nova_compute[185389]: 2026-01-26 16:42:37.293 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "refresh_cache-2ee04f75-dc75-489c-85b5-19cd6d573bf1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:42:37 compute-0 nova_compute[185389]: 2026-01-26 16:42:37.294 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquired lock "refresh_cache-2ee04f75-dc75-489c-85b5-19cd6d573bf1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:42:37 compute-0 nova_compute[185389]: 2026-01-26 16:42:37.295 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 26 16:42:38 compute-0 nova_compute[185389]: 2026-01-26 16:42:38.133 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:42:38 compute-0 nova_compute[185389]: 2026-01-26 16:42:38.845 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] Updating instance_info_cache with network_info: [{"id": "5e252863-184d-4e1e-a33d-6e280cd72b51", "address": "fa:16:3e:65:38:01", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.173", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e252863-18", "ovs_interfaceid": "5e252863-184d-4e1e-a33d-6e280cd72b51", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:42:38 compute-0 nova_compute[185389]: 2026-01-26 16:42:38.878 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Releasing lock "refresh_cache-2ee04f75-dc75-489c-85b5-19cd6d573bf1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:42:38 compute-0 nova_compute[185389]: 2026-01-26 16:42:38.880 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 26 16:42:38 compute-0 nova_compute[185389]: 2026-01-26 16:42:38.881 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:42:38 compute-0 nova_compute[185389]: 2026-01-26 16:42:38.882 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:42:38 compute-0 nova_compute[185389]: 2026-01-26 16:42:38.883 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:42:38 compute-0 nova_compute[185389]: 2026-01-26 16:42:38.885 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:42:38 compute-0 nova_compute[185389]: 2026-01-26 16:42:38.886 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 26 16:42:38 compute-0 nova_compute[185389]: 2026-01-26 16:42:38.912 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 26 16:42:39 compute-0 nova_compute[185389]: 2026-01-26 16:42:39.469 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:42:39 compute-0 nova_compute[185389]: 2026-01-26 16:42:39.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:42:39 compute-0 nova_compute[185389]: 2026-01-26 16:42:39.723 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:42:39 compute-0 nova_compute[185389]: 2026-01-26 16:42:39.724 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:42:39 compute-0 nova_compute[185389]: 2026-01-26 16:42:39.725 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:42:39 compute-0 nova_compute[185389]: 2026-01-26 16:42:39.756 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:42:39 compute-0 nova_compute[185389]: 2026-01-26 16:42:39.757 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:42:39 compute-0 nova_compute[185389]: 2026-01-26 16:42:39.758 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:42:39 compute-0 nova_compute[185389]: 2026-01-26 16:42:39.759 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:42:39 compute-0 nova_compute[185389]: 2026-01-26 16:42:39.979 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:42:40 compute-0 nova_compute[185389]: 2026-01-26 16:42:40.090 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json" returned: 0 in 0.111s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:42:40 compute-0 nova_compute[185389]: 2026-01-26 16:42:40.092 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:42:40 compute-0 nova_compute[185389]: 2026-01-26 16:42:40.170 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:42:40 compute-0 nova_compute[185389]: 2026-01-26 16:42:40.173 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:42:40 compute-0 nova_compute[185389]: 2026-01-26 16:42:40.237 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:42:40 compute-0 nova_compute[185389]: 2026-01-26 16:42:40.242 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:42:40 compute-0 nova_compute[185389]: 2026-01-26 16:42:40.325 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:42:40 compute-0 nova_compute[185389]: 2026-01-26 16:42:40.333 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:42:40 compute-0 nova_compute[185389]: 2026-01-26 16:42:40.392 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:42:40 compute-0 nova_compute[185389]: 2026-01-26 16:42:40.394 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:42:40 compute-0 nova_compute[185389]: 2026-01-26 16:42:40.492 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:42:40 compute-0 nova_compute[185389]: 2026-01-26 16:42:40.496 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:42:40 compute-0 nova_compute[185389]: 2026-01-26 16:42:40.586 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.eph0 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:42:40 compute-0 nova_compute[185389]: 2026-01-26 16:42:40.587 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:42:40 compute-0 nova_compute[185389]: 2026-01-26 16:42:40.676 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.eph0 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:42:41 compute-0 nova_compute[185389]: 2026-01-26 16:42:41.075 185393 WARNING nova.virt.libvirt.driver [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:42:41 compute-0 nova_compute[185389]: 2026-01-26 16:42:41.078 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5058MB free_disk=72.40326690673828GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:42:41 compute-0 nova_compute[185389]: 2026-01-26 16:42:41.079 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:42:41 compute-0 nova_compute[185389]: 2026-01-26 16:42:41.079 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:42:41 compute-0 podman[240775]: 2026-01-26 16:42:41.223688896 +0000 UTC m=+0.099440931 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 16:42:41 compute-0 nova_compute[185389]: 2026-01-26 16:42:41.333 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance 60ba224f-9c5d-4eb4-b501-66d7339832b9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:42:41 compute-0 nova_compute[185389]: 2026-01-26 16:42:41.335 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance 2ee04f75-dc75-489c-85b5-19cd6d573bf1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:42:41 compute-0 nova_compute[185389]: 2026-01-26 16:42:41.336 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:42:41 compute-0 nova_compute[185389]: 2026-01-26 16:42:41.336 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:42:41 compute-0 nova_compute[185389]: 2026-01-26 16:42:41.410 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Refreshing inventories for resource provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 26 16:42:41 compute-0 nova_compute[185389]: 2026-01-26 16:42:41.484 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Updating ProviderTree inventory for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 26 16:42:41 compute-0 nova_compute[185389]: 2026-01-26 16:42:41.484 185393 DEBUG nova.compute.provider_tree [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Updating inventory in ProviderTree for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 26 16:42:41 compute-0 nova_compute[185389]: 2026-01-26 16:42:41.515 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Refreshing aggregate associations for resource provider b0bb5d31-f35b-4a04-b67d-66acc24fb822, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 26 16:42:41 compute-0 nova_compute[185389]: 2026-01-26 16:42:41.535 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Refreshing trait associations for resource provider b0bb5d31-f35b-4a04-b67d-66acc24fb822, traits: COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SHA,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_ABM,HW_CPU_X86_AMD_SVM,HW_CPU_X86_F16C,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE42,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_BMI,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_MMX,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 26 16:42:41 compute-0 nova_compute[185389]: 2026-01-26 16:42:41.601 185393 DEBUG nova.compute.provider_tree [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:42:41 compute-0 nova_compute[185389]: 2026-01-26 16:42:41.676 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:42:41 compute-0 nova_compute[185389]: 2026-01-26 16:42:41.686 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:42:41 compute-0 nova_compute[185389]: 2026-01-26 16:42:41.689 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.609s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:42:41 compute-0 nova_compute[185389]: 2026-01-26 16:42:41.690 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:42:42 compute-0 nova_compute[185389]: 2026-01-26 16:42:42.849 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:42:43 compute-0 nova_compute[185389]: 2026-01-26 16:42:43.144 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:42:44 compute-0 nova_compute[185389]: 2026-01-26 16:42:44.471 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:42:45 compute-0 nova_compute[185389]: 2026-01-26 16:42:45.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:42:48 compute-0 nova_compute[185389]: 2026-01-26 16:42:48.149 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:42:48 compute-0 podman[240798]: 2026-01-26 16:42:48.254183169 +0000 UTC m=+0.123424155 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, version=9.6, config_id=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.openshift.expose-services=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter)
Jan 26 16:42:49 compute-0 nova_compute[185389]: 2026-01-26 16:42:49.475 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:42:51 compute-0 podman[240820]: 2026-01-26 16:42:51.198235726 +0000 UTC m=+0.079449047 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260120, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=93ecf842527b95c82e14fba92451bd07)
Jan 26 16:42:53 compute-0 nova_compute[185389]: 2026-01-26 16:42:53.154 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:42:54 compute-0 nova_compute[185389]: 2026-01-26 16:42:54.479 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:42:57 compute-0 podman[240841]: 2026-01-26 16:42:57.238442775 +0000 UTC m=+0.103279836 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 26 16:42:58 compute-0 nova_compute[185389]: 2026-01-26 16:42:58.161 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:42:59 compute-0 nova_compute[185389]: 2026-01-26 16:42:59.481 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:42:59 compute-0 podman[201244]: time="2026-01-26T16:42:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 16:42:59 compute-0 podman[201244]: @ - - [26/Jan/2026:16:42:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 16:42:59 compute-0 podman[201244]: @ - - [26/Jan/2026:16:42:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4366 "" "Go-http-client/1.1"
Jan 26 16:43:00 compute-0 podman[240865]: 2026-01-26 16:43:00.208879841 +0000 UTC m=+0.093462208 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:43:01 compute-0 openstack_network_exporter[204387]: ERROR   16:43:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 16:43:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:43:01 compute-0 openstack_network_exporter[204387]: ERROR   16:43:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 16:43:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:43:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:43:01.721 106955 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:43:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:43:01.724 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:43:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:43:01.726 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:43:02 compute-0 podman[240884]: 2026-01-26 16:43:02.24166579 +0000 UTC m=+0.115121019 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 26 16:43:03 compute-0 nova_compute[185389]: 2026-01-26 16:43:03.167 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:43:04 compute-0 podman[240903]: 2026-01-26 16:43:04.225575545 +0000 UTC m=+0.103235425 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.buildah.version=1.29.0, release=1214.1726694543, vcs-type=git, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, build-date=2024-09-18T21:23:30, release-0.7.12=, name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.component=ubi9-container, container_name=kepler, config_id=kepler, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 26 16:43:04 compute-0 podman[240902]: 2026-01-26 16:43:04.319823644 +0000 UTC m=+0.208730101 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 26 16:43:04 compute-0 nova_compute[185389]: 2026-01-26 16:43:04.482 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:43:08 compute-0 nova_compute[185389]: 2026-01-26 16:43:08.178 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:43:09 compute-0 nova_compute[185389]: 2026-01-26 16:43:09.484 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:43:12 compute-0 podman[240947]: 2026-01-26 16:43:12.233631712 +0000 UTC m=+0.108317254 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 26 16:43:13 compute-0 nova_compute[185389]: 2026-01-26 16:43:13.184 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:43:14 compute-0 nova_compute[185389]: 2026-01-26 16:43:14.488 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:43:18 compute-0 nova_compute[185389]: 2026-01-26 16:43:18.191 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:43:19 compute-0 podman[240971]: 2026-01-26 16:43:19.191043752 +0000 UTC m=+0.080777353 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, version=9.6, architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, config_id=openstack_network_exporter, distribution-scope=public, release=1755695350, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, vendor=Red Hat, Inc., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 
'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container)
Jan 26 16:43:19 compute-0 nova_compute[185389]: 2026-01-26 16:43:19.491 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:43:22 compute-0 podman[240992]: 2026-01-26 16:43:22.23386314 +0000 UTC m=+0.116121765 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, 
org.label-schema.build-date=20260120, org.label-schema.schema-version=1.0, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image)
Jan 26 16:43:23 compute-0 nova_compute[185389]: 2026-01-26 16:43:23.195 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:43:24 compute-0 nova_compute[185389]: 2026-01-26 16:43:24.494 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:43:28 compute-0 nova_compute[185389]: 2026-01-26 16:43:28.200 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:43:28 compute-0 podman[241010]: 2026-01-26 16:43:28.227558633 +0000 UTC m=+0.091396753 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 26 16:43:29 compute-0 nova_compute[185389]: 2026-01-26 16:43:29.501 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:43:29 compute-0 podman[201244]: time="2026-01-26T16:43:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 16:43:29 compute-0 podman[201244]: @ - - [26/Jan/2026:16:43:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 16:43:29 compute-0 podman[201244]: @ - - [26/Jan/2026:16:43:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4365 "" "Go-http-client/1.1"
Jan 26 16:43:31 compute-0 podman[241034]: 2026-01-26 16:43:31.232461199 +0000 UTC m=+0.117022651 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 16:43:31 compute-0 openstack_network_exporter[204387]: ERROR   16:43:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 16:43:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:43:31 compute-0 openstack_network_exporter[204387]: ERROR   16:43:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 16:43:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:43:33 compute-0 nova_compute[185389]: 2026-01-26 16:43:33.205 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:43:33 compute-0 podman[241053]: 2026-01-26 16:43:33.218486262 +0000 UTC m=+0.087684341 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:43:34 compute-0 nova_compute[185389]: 2026-01-26 16:43:34.511 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:43:35 compute-0 podman[241074]: 2026-01-26 16:43:35.204221647 +0000 UTC m=+0.085104880 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=kepler, vcs-type=git, version=9.4, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, release-0.7.12=, com.redhat.component=ubi9-container, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, container_name=kepler, build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', 
'/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, managed_by=edpm_ansible, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9)
Jan 26 16:43:35 compute-0 podman[241073]: 2026-01-26 16:43:35.231572253 +0000 UTC m=+0.119826897 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Jan 26 16:43:37 compute-0 nova_compute[185389]: 2026-01-26 16:43:37.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:43:37 compute-0 nova_compute[185389]: 2026-01-26 16:43:37.720 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:43:38 compute-0 nova_compute[185389]: 2026-01-26 16:43:38.191 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:43:38 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:43:38.195 106955 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '0e:35:12', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'de:09:3c:73:7d:ab'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:43:38 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:43:38.197 106955 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 26 16:43:38 compute-0 nova_compute[185389]: 2026-01-26 16:43:38.207 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:43:38 compute-0 nova_compute[185389]: 2026-01-26 16:43:38.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:43:38 compute-0 nova_compute[185389]: 2026-01-26 16:43:38.721 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:43:38 compute-0 nova_compute[185389]: 2026-01-26 16:43:38.721 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 16:43:39 compute-0 nova_compute[185389]: 2026-01-26 16:43:39.421 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "refresh_cache-60ba224f-9c5d-4eb4-b501-66d7339832b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:43:39 compute-0 nova_compute[185389]: 2026-01-26 16:43:39.422 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquired lock "refresh_cache-60ba224f-9c5d-4eb4-b501-66d7339832b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:43:39 compute-0 nova_compute[185389]: 2026-01-26 16:43:39.422 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 26 16:43:39 compute-0 nova_compute[185389]: 2026-01-26 16:43:39.422 185393 DEBUG nova.objects.instance [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 60ba224f-9c5d-4eb4-b501-66d7339832b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:43:39 compute-0 nova_compute[185389]: 2026-01-26 16:43:39.512 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:43:41 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:43:41.201 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1c72c11d-5050-47c3-89e8-912766588fb3, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:43:43 compute-0 nova_compute[185389]: 2026-01-26 16:43:43.021 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Updating instance_info_cache with network_info: [{"id": "0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f", "address": "fa:16:3e:b0:51:31", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.57", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f88f3ae-fb", "ovs_interfaceid": "0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:43:43 compute-0 nova_compute[185389]: 2026-01-26 16:43:43.210 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:43:43 compute-0 podman[241114]: 2026-01-26 16:43:43.230420911 +0000 UTC m=+0.109192918 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 26 16:43:43 compute-0 nova_compute[185389]: 2026-01-26 16:43:43.327 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Releasing lock "refresh_cache-60ba224f-9c5d-4eb4-b501-66d7339832b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:43:43 compute-0 nova_compute[185389]: 2026-01-26 16:43:43.327 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 26 16:43:43 compute-0 nova_compute[185389]: 2026-01-26 16:43:43.328 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:43:43 compute-0 nova_compute[185389]: 2026-01-26 16:43:43.328 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:43:43 compute-0 nova_compute[185389]: 2026-01-26 16:43:43.328 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:43:43 compute-0 nova_compute[185389]: 2026-01-26 16:43:43.328 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:43:43 compute-0 nova_compute[185389]: 2026-01-26 16:43:43.329 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:43:44 compute-0 nova_compute[185389]: 2026-01-26 16:43:44.388 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:43:44 compute-0 nova_compute[185389]: 2026-01-26 16:43:44.388 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:43:44 compute-0 nova_compute[185389]: 2026-01-26 16:43:44.389 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:43:44 compute-0 nova_compute[185389]: 2026-01-26 16:43:44.389 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:43:44 compute-0 nova_compute[185389]: 2026-01-26 16:43:44.517 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:43:44 compute-0 nova_compute[185389]: 2026-01-26 16:43:44.529 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:43:44 compute-0 nova_compute[185389]: 2026-01-26 16:43:44.638 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json" returned: 0 in 0.109s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:43:44 compute-0 nova_compute[185389]: 2026-01-26 16:43:44.640 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:43:44 compute-0 nova_compute[185389]: 2026-01-26 16:43:44.729 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:43:44 compute-0 nova_compute[185389]: 2026-01-26 16:43:44.736 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:43:44 compute-0 nova_compute[185389]: 2026-01-26 16:43:44.799 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:43:44 compute-0 nova_compute[185389]: 2026-01-26 16:43:44.801 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:43:44 compute-0 nova_compute[185389]: 2026-01-26 16:43:44.866 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:43:44 compute-0 nova_compute[185389]: 2026-01-26 16:43:44.878 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:43:44 compute-0 nova_compute[185389]: 2026-01-26 16:43:44.944 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:43:44 compute-0 nova_compute[185389]: 2026-01-26 16:43:44.947 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:43:45 compute-0 nova_compute[185389]: 2026-01-26 16:43:45.047 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:43:45 compute-0 nova_compute[185389]: 2026-01-26 16:43:45.058 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:43:45 compute-0 nova_compute[185389]: 2026-01-26 16:43:45.147 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.eph0 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:43:45 compute-0 nova_compute[185389]: 2026-01-26 16:43:45.149 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:43:45 compute-0 nova_compute[185389]: 2026-01-26 16:43:45.211 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:43:45 compute-0 nova_compute[185389]: 2026-01-26 16:43:45.623 185393 WARNING nova.virt.libvirt.driver [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:43:45 compute-0 nova_compute[185389]: 2026-01-26 16:43:45.625 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5055MB free_disk=72.40341567993164GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:43:45 compute-0 nova_compute[185389]: 2026-01-26 16:43:45.625 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:43:45 compute-0 nova_compute[185389]: 2026-01-26 16:43:45.626 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:43:46 compute-0 nova_compute[185389]: 2026-01-26 16:43:46.287 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance 60ba224f-9c5d-4eb4-b501-66d7339832b9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:43:46 compute-0 nova_compute[185389]: 2026-01-26 16:43:46.288 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance 2ee04f75-dc75-489c-85b5-19cd6d573bf1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:43:46 compute-0 nova_compute[185389]: 2026-01-26 16:43:46.289 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:43:46 compute-0 nova_compute[185389]: 2026-01-26 16:43:46.289 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:43:46 compute-0 nova_compute[185389]: 2026-01-26 16:43:46.367 185393 DEBUG nova.compute.provider_tree [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:43:46 compute-0 nova_compute[185389]: 2026-01-26 16:43:46.415 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:43:46 compute-0 nova_compute[185389]: 2026-01-26 16:43:46.418 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:43:46 compute-0 nova_compute[185389]: 2026-01-26 16:43:46.418 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.792s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:43:47 compute-0 nova_compute[185389]: 2026-01-26 16:43:47.413 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:43:47 compute-0 nova_compute[185389]: 2026-01-26 16:43:47.413 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:43:47 compute-0 nova_compute[185389]: 2026-01-26 16:43:47.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:43:48 compute-0 nova_compute[185389]: 2026-01-26 16:43:48.214 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:43:49 compute-0 nova_compute[185389]: 2026-01-26 16:43:49.419 185393 DEBUG oslo_concurrency.lockutils [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Acquiring lock "4243720a-45ff-439a-9753-a7da419082b2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:43:49 compute-0 nova_compute[185389]: 2026-01-26 16:43:49.422 185393 DEBUG oslo_concurrency.lockutils [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "4243720a-45ff-439a-9753-a7da419082b2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:43:49 compute-0 nova_compute[185389]: 2026-01-26 16:43:49.457 185393 DEBUG nova.compute.manager [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 4243720a-45ff-439a-9753-a7da419082b2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:43:49 compute-0 nova_compute[185389]: 2026-01-26 16:43:49.518 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:43:49 compute-0 nova_compute[185389]: 2026-01-26 16:43:49.547 185393 DEBUG oslo_concurrency.lockutils [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:43:49 compute-0 nova_compute[185389]: 2026-01-26 16:43:49.548 185393 DEBUG oslo_concurrency.lockutils [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:43:49 compute-0 nova_compute[185389]: 2026-01-26 16:43:49.564 185393 DEBUG nova.virt.hardware [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:43:49 compute-0 nova_compute[185389]: 2026-01-26 16:43:49.565 185393 INFO nova.compute.claims [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 4243720a-45ff-439a-9753-a7da419082b2] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:43:49 compute-0 nova_compute[185389]: 2026-01-26 16:43:49.769 185393 DEBUG nova.compute.provider_tree [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:43:49 compute-0 nova_compute[185389]: 2026-01-26 16:43:49.802 185393 DEBUG nova.scheduler.client.report [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:43:49 compute-0 nova_compute[185389]: 2026-01-26 16:43:49.831 185393 DEBUG oslo_concurrency.lockutils [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.283s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:43:49 compute-0 nova_compute[185389]: 2026-01-26 16:43:49.832 185393 DEBUG nova.compute.manager [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 4243720a-45ff-439a-9753-a7da419082b2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:43:49 compute-0 nova_compute[185389]: 2026-01-26 16:43:49.880 185393 DEBUG nova.compute.manager [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 4243720a-45ff-439a-9753-a7da419082b2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:43:49 compute-0 nova_compute[185389]: 2026-01-26 16:43:49.881 185393 DEBUG nova.network.neutron [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 4243720a-45ff-439a-9753-a7da419082b2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:43:49 compute-0 nova_compute[185389]: 2026-01-26 16:43:49.908 185393 INFO nova.virt.libvirt.driver [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 4243720a-45ff-439a-9753-a7da419082b2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:43:49 compute-0 nova_compute[185389]: 2026-01-26 16:43:49.944 185393 DEBUG nova.compute.manager [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 4243720a-45ff-439a-9753-a7da419082b2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:43:50 compute-0 nova_compute[185389]: 2026-01-26 16:43:50.053 185393 DEBUG nova.compute.manager [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 4243720a-45ff-439a-9753-a7da419082b2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:43:50 compute-0 nova_compute[185389]: 2026-01-26 16:43:50.068 185393 DEBUG nova.virt.libvirt.driver [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 4243720a-45ff-439a-9753-a7da419082b2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:43:50 compute-0 nova_compute[185389]: 2026-01-26 16:43:50.069 185393 INFO nova.virt.libvirt.driver [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 4243720a-45ff-439a-9753-a7da419082b2] Creating image(s)
Jan 26 16:43:50 compute-0 nova_compute[185389]: 2026-01-26 16:43:50.070 185393 DEBUG oslo_concurrency.lockutils [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Acquiring lock "/var/lib/nova/instances/4243720a-45ff-439a-9753-a7da419082b2/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:43:50 compute-0 nova_compute[185389]: 2026-01-26 16:43:50.070 185393 DEBUG oslo_concurrency.lockutils [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "/var/lib/nova/instances/4243720a-45ff-439a-9753-a7da419082b2/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:43:50 compute-0 nova_compute[185389]: 2026-01-26 16:43:50.071 185393 DEBUG oslo_concurrency.lockutils [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "/var/lib/nova/instances/4243720a-45ff-439a-9753-a7da419082b2/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:43:50 compute-0 nova_compute[185389]: 2026-01-26 16:43:50.098 185393 DEBUG oslo_concurrency.processutils [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7a5e3188ac4de3f0ad8eeb8c9bbd6ccd05a86bb3 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:43:50 compute-0 nova_compute[185389]: 2026-01-26 16:43:50.208 185393 DEBUG oslo_concurrency.processutils [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7a5e3188ac4de3f0ad8eeb8c9bbd6ccd05a86bb3 --force-share --output=json" returned: 0 in 0.109s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:43:50 compute-0 nova_compute[185389]: 2026-01-26 16:43:50.209 185393 DEBUG oslo_concurrency.lockutils [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Acquiring lock "7a5e3188ac4de3f0ad8eeb8c9bbd6ccd05a86bb3" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:43:50 compute-0 nova_compute[185389]: 2026-01-26 16:43:50.210 185393 DEBUG oslo_concurrency.lockutils [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "7a5e3188ac4de3f0ad8eeb8c9bbd6ccd05a86bb3" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:43:50 compute-0 podman[241164]: 2026-01-26 16:43:50.223790211 +0000 UTC m=+0.108010546 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, architecture=x86_64, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, release=1755695350, com.redhat.component=ubi9-minimal-container)
Jan 26 16:43:50 compute-0 nova_compute[185389]: 2026-01-26 16:43:50.232 185393 DEBUG oslo_concurrency.processutils [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7a5e3188ac4de3f0ad8eeb8c9bbd6ccd05a86bb3 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:43:50 compute-0 nova_compute[185389]: 2026-01-26 16:43:50.294 185393 DEBUG oslo_concurrency.processutils [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7a5e3188ac4de3f0ad8eeb8c9bbd6ccd05a86bb3 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:43:50 compute-0 nova_compute[185389]: 2026-01-26 16:43:50.295 185393 DEBUG oslo_concurrency.processutils [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/7a5e3188ac4de3f0ad8eeb8c9bbd6ccd05a86bb3,backing_fmt=raw /var/lib/nova/instances/4243720a-45ff-439a-9753-a7da419082b2/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:43:50 compute-0 nova_compute[185389]: 2026-01-26 16:43:50.353 185393 DEBUG oslo_concurrency.processutils [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/7a5e3188ac4de3f0ad8eeb8c9bbd6ccd05a86bb3,backing_fmt=raw /var/lib/nova/instances/4243720a-45ff-439a-9753-a7da419082b2/disk 1073741824" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:43:50 compute-0 nova_compute[185389]: 2026-01-26 16:43:50.354 185393 DEBUG oslo_concurrency.lockutils [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "7a5e3188ac4de3f0ad8eeb8c9bbd6ccd05a86bb3" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.144s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:43:50 compute-0 nova_compute[185389]: 2026-01-26 16:43:50.355 185393 DEBUG oslo_concurrency.processutils [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7a5e3188ac4de3f0ad8eeb8c9bbd6ccd05a86bb3 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:43:50 compute-0 nova_compute[185389]: 2026-01-26 16:43:50.442 185393 DEBUG oslo_concurrency.processutils [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7a5e3188ac4de3f0ad8eeb8c9bbd6ccd05a86bb3 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:43:50 compute-0 nova_compute[185389]: 2026-01-26 16:43:50.444 185393 DEBUG nova.virt.disk.api [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Checking if we can resize image /var/lib/nova/instances/4243720a-45ff-439a-9753-a7da419082b2/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 26 16:43:50 compute-0 nova_compute[185389]: 2026-01-26 16:43:50.445 185393 DEBUG oslo_concurrency.processutils [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4243720a-45ff-439a-9753-a7da419082b2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:43:50 compute-0 nova_compute[185389]: 2026-01-26 16:43:50.509 185393 DEBUG oslo_concurrency.processutils [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4243720a-45ff-439a-9753-a7da419082b2/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:43:50 compute-0 nova_compute[185389]: 2026-01-26 16:43:50.510 185393 DEBUG nova.virt.disk.api [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Cannot resize image /var/lib/nova/instances/4243720a-45ff-439a-9753-a7da419082b2/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 26 16:43:50 compute-0 nova_compute[185389]: 2026-01-26 16:43:50.511 185393 DEBUG nova.objects.instance [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lazy-loading 'migration_context' on Instance uuid 4243720a-45ff-439a-9753-a7da419082b2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:43:50 compute-0 nova_compute[185389]: 2026-01-26 16:43:50.759 185393 DEBUG oslo_concurrency.lockutils [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Acquiring lock "/var/lib/nova/instances/4243720a-45ff-439a-9753-a7da419082b2/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:43:50 compute-0 nova_compute[185389]: 2026-01-26 16:43:50.760 185393 DEBUG oslo_concurrency.lockutils [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "/var/lib/nova/instances/4243720a-45ff-439a-9753-a7da419082b2/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:43:50 compute-0 nova_compute[185389]: 2026-01-26 16:43:50.761 185393 DEBUG oslo_concurrency.lockutils [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "/var/lib/nova/instances/4243720a-45ff-439a-9753-a7da419082b2/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:43:50 compute-0 nova_compute[185389]: 2026-01-26 16:43:50.781 185393 DEBUG oslo_concurrency.processutils [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:43:50 compute-0 nova_compute[185389]: 2026-01-26 16:43:50.865 185393 DEBUG oslo_concurrency.processutils [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:43:50 compute-0 nova_compute[185389]: 2026-01-26 16:43:50.866 185393 DEBUG oslo_concurrency.lockutils [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:43:50 compute-0 nova_compute[185389]: 2026-01-26 16:43:50.866 185393 DEBUG oslo_concurrency.lockutils [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:43:50 compute-0 nova_compute[185389]: 2026-01-26 16:43:50.878 185393 DEBUG oslo_concurrency.processutils [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:43:50 compute-0 nova_compute[185389]: 2026-01-26 16:43:50.968 185393 DEBUG oslo_concurrency.processutils [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:43:50 compute-0 nova_compute[185389]: 2026-01-26 16:43:50.970 185393 DEBUG oslo_concurrency.processutils [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/4243720a-45ff-439a-9753-a7da419082b2/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:43:51 compute-0 nova_compute[185389]: 2026-01-26 16:43:51.025 185393 DEBUG oslo_concurrency.processutils [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/4243720a-45ff-439a-9753-a7da419082b2/disk.eph0 1073741824" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:43:51 compute-0 nova_compute[185389]: 2026-01-26 16:43:51.026 185393 DEBUG oslo_concurrency.lockutils [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.160s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:43:51 compute-0 nova_compute[185389]: 2026-01-26 16:43:51.027 185393 DEBUG oslo_concurrency.processutils [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:43:51 compute-0 nova_compute[185389]: 2026-01-26 16:43:51.090 185393 DEBUG oslo_concurrency.processutils [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:43:51 compute-0 nova_compute[185389]: 2026-01-26 16:43:51.091 185393 DEBUG nova.virt.libvirt.driver [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 4243720a-45ff-439a-9753-a7da419082b2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:43:51 compute-0 nova_compute[185389]: 2026-01-26 16:43:51.091 185393 DEBUG nova.virt.libvirt.driver [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 4243720a-45ff-439a-9753-a7da419082b2] Ensure instance console log exists: /var/lib/nova/instances/4243720a-45ff-439a-9753-a7da419082b2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:43:51 compute-0 nova_compute[185389]: 2026-01-26 16:43:51.091 185393 DEBUG oslo_concurrency.lockutils [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:43:51 compute-0 nova_compute[185389]: 2026-01-26 16:43:51.092 185393 DEBUG oslo_concurrency.lockutils [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:43:51 compute-0 nova_compute[185389]: 2026-01-26 16:43:51.092 185393 DEBUG oslo_concurrency.lockutils [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:43:51 compute-0 nova_compute[185389]: 2026-01-26 16:43:51.716 185393 DEBUG nova.network.neutron [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 4243720a-45ff-439a-9753-a7da419082b2] Successfully updated port: 7839c0a2-ac0b-4c45-8c81-670b0c2a638c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:43:51 compute-0 nova_compute[185389]: 2026-01-26 16:43:51.735 185393 DEBUG oslo_concurrency.lockutils [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Acquiring lock "refresh_cache-4243720a-45ff-439a-9753-a7da419082b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:43:51 compute-0 nova_compute[185389]: 2026-01-26 16:43:51.736 185393 DEBUG oslo_concurrency.lockutils [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Acquired lock "refresh_cache-4243720a-45ff-439a-9753-a7da419082b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:43:51 compute-0 nova_compute[185389]: 2026-01-26 16:43:51.736 185393 DEBUG nova.network.neutron [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 4243720a-45ff-439a-9753-a7da419082b2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:43:51 compute-0 nova_compute[185389]: 2026-01-26 16:43:51.751 185393 DEBUG oslo_concurrency.lockutils [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Acquiring lock "a2578f61-3f19-40f4-a32f-97cf22569550" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:43:51 compute-0 nova_compute[185389]: 2026-01-26 16:43:51.752 185393 DEBUG oslo_concurrency.lockutils [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "a2578f61-3f19-40f4-a32f-97cf22569550" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:43:51 compute-0 nova_compute[185389]: 2026-01-26 16:43:51.782 185393 DEBUG nova.compute.manager [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:43:51 compute-0 nova_compute[185389]: 2026-01-26 16:43:51.855 185393 DEBUG nova.compute.manager [req-d260222f-c943-4e8c-824b-d3db5ae18b1d req-9f0235f1-01f7-44ea-a6bc-fdf4c2c09219 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 4243720a-45ff-439a-9753-a7da419082b2] Received event network-changed-7839c0a2-ac0b-4c45-8c81-670b0c2a638c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:43:51 compute-0 nova_compute[185389]: 2026-01-26 16:43:51.856 185393 DEBUG nova.compute.manager [req-d260222f-c943-4e8c-824b-d3db5ae18b1d req-9f0235f1-01f7-44ea-a6bc-fdf4c2c09219 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 4243720a-45ff-439a-9753-a7da419082b2] Refreshing instance network info cache due to event network-changed-7839c0a2-ac0b-4c45-8c81-670b0c2a638c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:43:51 compute-0 nova_compute[185389]: 2026-01-26 16:43:51.856 185393 DEBUG oslo_concurrency.lockutils [req-d260222f-c943-4e8c-824b-d3db5ae18b1d req-9f0235f1-01f7-44ea-a6bc-fdf4c2c09219 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquiring lock "refresh_cache-4243720a-45ff-439a-9753-a7da419082b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:43:51 compute-0 nova_compute[185389]: 2026-01-26 16:43:51.905 185393 DEBUG oslo_concurrency.lockutils [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:43:51 compute-0 nova_compute[185389]: 2026-01-26 16:43:51.906 185393 DEBUG oslo_concurrency.lockutils [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:43:51 compute-0 nova_compute[185389]: 2026-01-26 16:43:51.916 185393 DEBUG nova.virt.hardware [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:43:51 compute-0 nova_compute[185389]: 2026-01-26 16:43:51.916 185393 INFO nova.compute.claims [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:43:51 compute-0 nova_compute[185389]: 2026-01-26 16:43:51.937 185393 DEBUG nova.network.neutron [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 4243720a-45ff-439a-9753-a7da419082b2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:43:52 compute-0 nova_compute[185389]: 2026-01-26 16:43:52.101 185393 DEBUG nova.compute.provider_tree [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:43:52 compute-0 nova_compute[185389]: 2026-01-26 16:43:52.119 185393 DEBUG nova.scheduler.client.report [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:43:52 compute-0 nova_compute[185389]: 2026-01-26 16:43:52.146 185393 DEBUG oslo_concurrency.lockutils [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.240s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:43:52 compute-0 nova_compute[185389]: 2026-01-26 16:43:52.146 185393 DEBUG nova.compute.manager [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:43:52 compute-0 nova_compute[185389]: 2026-01-26 16:43:52.206 185393 DEBUG nova.compute.manager [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:43:52 compute-0 nova_compute[185389]: 2026-01-26 16:43:52.206 185393 DEBUG nova.network.neutron [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:43:52 compute-0 nova_compute[185389]: 2026-01-26 16:43:52.233 185393 INFO nova.virt.libvirt.driver [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:43:52 compute-0 nova_compute[185389]: 2026-01-26 16:43:52.274 185393 DEBUG nova.compute.manager [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:43:52 compute-0 nova_compute[185389]: 2026-01-26 16:43:52.573 185393 DEBUG nova.compute.manager [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:43:52 compute-0 nova_compute[185389]: 2026-01-26 16:43:52.574 185393 DEBUG nova.virt.libvirt.driver [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:43:52 compute-0 nova_compute[185389]: 2026-01-26 16:43:52.575 185393 INFO nova.virt.libvirt.driver [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Creating image(s)
Jan 26 16:43:52 compute-0 nova_compute[185389]: 2026-01-26 16:43:52.576 185393 DEBUG oslo_concurrency.lockutils [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Acquiring lock "/var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:43:52 compute-0 nova_compute[185389]: 2026-01-26 16:43:52.576 185393 DEBUG oslo_concurrency.lockutils [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "/var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:43:52 compute-0 nova_compute[185389]: 2026-01-26 16:43:52.578 185393 DEBUG oslo_concurrency.lockutils [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "/var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:43:52 compute-0 nova_compute[185389]: 2026-01-26 16:43:52.607 185393 DEBUG oslo_concurrency.processutils [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7a5e3188ac4de3f0ad8eeb8c9bbd6ccd05a86bb3 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:43:52 compute-0 nova_compute[185389]: 2026-01-26 16:43:52.707 185393 DEBUG oslo_concurrency.processutils [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7a5e3188ac4de3f0ad8eeb8c9bbd6ccd05a86bb3 --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:43:52 compute-0 nova_compute[185389]: 2026-01-26 16:43:52.709 185393 DEBUG oslo_concurrency.lockutils [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Acquiring lock "7a5e3188ac4de3f0ad8eeb8c9bbd6ccd05a86bb3" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:43:52 compute-0 nova_compute[185389]: 2026-01-26 16:43:52.710 185393 DEBUG oslo_concurrency.lockutils [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "7a5e3188ac4de3f0ad8eeb8c9bbd6ccd05a86bb3" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:43:52 compute-0 nova_compute[185389]: 2026-01-26 16:43:52.735 185393 DEBUG oslo_concurrency.processutils [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7a5e3188ac4de3f0ad8eeb8c9bbd6ccd05a86bb3 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:43:52 compute-0 nova_compute[185389]: 2026-01-26 16:43:52.830 185393 DEBUG oslo_concurrency.processutils [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7a5e3188ac4de3f0ad8eeb8c9bbd6ccd05a86bb3 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:43:52 compute-0 nova_compute[185389]: 2026-01-26 16:43:52.832 185393 DEBUG oslo_concurrency.processutils [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/7a5e3188ac4de3f0ad8eeb8c9bbd6ccd05a86bb3,backing_fmt=raw /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:43:52 compute-0 nova_compute[185389]: 2026-01-26 16:43:52.902 185393 DEBUG oslo_concurrency.processutils [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/7a5e3188ac4de3f0ad8eeb8c9bbd6ccd05a86bb3,backing_fmt=raw /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk 1073741824" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:43:52 compute-0 nova_compute[185389]: 2026-01-26 16:43:52.904 185393 DEBUG oslo_concurrency.lockutils [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "7a5e3188ac4de3f0ad8eeb8c9bbd6ccd05a86bb3" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.194s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:43:52 compute-0 nova_compute[185389]: 2026-01-26 16:43:52.904 185393 DEBUG oslo_concurrency.processutils [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7a5e3188ac4de3f0ad8eeb8c9bbd6ccd05a86bb3 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:43:52 compute-0 nova_compute[185389]: 2026-01-26 16:43:52.998 185393 DEBUG oslo_concurrency.processutils [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7a5e3188ac4de3f0ad8eeb8c9bbd6ccd05a86bb3 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:43:53 compute-0 nova_compute[185389]: 2026-01-26 16:43:52.999 185393 DEBUG nova.virt.disk.api [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Checking if we can resize image /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 26 16:43:53 compute-0 nova_compute[185389]: 2026-01-26 16:43:53.000 185393 DEBUG oslo_concurrency.processutils [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:43:53 compute-0 nova_compute[185389]: 2026-01-26 16:43:53.109 185393 DEBUG oslo_concurrency.processutils [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json" returned: 0 in 0.109s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:43:53 compute-0 nova_compute[185389]: 2026-01-26 16:43:53.111 185393 DEBUG nova.virt.disk.api [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Cannot resize image /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 26 16:43:53 compute-0 nova_compute[185389]: 2026-01-26 16:43:53.111 185393 DEBUG nova.objects.instance [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lazy-loading 'migration_context' on Instance uuid a2578f61-3f19-40f4-a32f-97cf22569550 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:43:53 compute-0 nova_compute[185389]: 2026-01-26 16:43:53.129 185393 DEBUG oslo_concurrency.lockutils [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Acquiring lock "/var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:43:53 compute-0 nova_compute[185389]: 2026-01-26 16:43:53.130 185393 DEBUG oslo_concurrency.lockutils [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "/var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:43:53 compute-0 nova_compute[185389]: 2026-01-26 16:43:53.131 185393 DEBUG oslo_concurrency.lockutils [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "/var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:43:53 compute-0 nova_compute[185389]: 2026-01-26 16:43:53.168 185393 DEBUG oslo_concurrency.processutils [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:43:53 compute-0 nova_compute[185389]: 2026-01-26 16:43:53.219 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:43:53 compute-0 nova_compute[185389]: 2026-01-26 16:43:53.240 185393 DEBUG oslo_concurrency.processutils [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:43:53 compute-0 nova_compute[185389]: 2026-01-26 16:43:53.241 185393 DEBUG oslo_concurrency.lockutils [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:43:53 compute-0 nova_compute[185389]: 2026-01-26 16:43:53.242 185393 DEBUG oslo_concurrency.lockutils [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:43:53 compute-0 nova_compute[185389]: 2026-01-26 16:43:53.258 185393 DEBUG oslo_concurrency.processutils [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:43:53 compute-0 podman[241225]: 2026-01-26 16:43:53.284619151 +0000 UTC m=+0.156126397 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260120, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 16:43:53 compute-0 nova_compute[185389]: 2026-01-26 16:43:53.322 185393 DEBUG oslo_concurrency.processutils [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:43:53 compute-0 nova_compute[185389]: 2026-01-26 16:43:53.323 185393 DEBUG oslo_concurrency.processutils [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:43:53 compute-0 nova_compute[185389]: 2026-01-26 16:43:53.371 185393 DEBUG oslo_concurrency.processutils [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 1073741824" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:43:53 compute-0 nova_compute[185389]: 2026-01-26 16:43:53.372 185393 DEBUG oslo_concurrency.lockutils [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.130s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:43:53 compute-0 nova_compute[185389]: 2026-01-26 16:43:53.372 185393 DEBUG oslo_concurrency.processutils [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:43:53 compute-0 nova_compute[185389]: 2026-01-26 16:43:53.464 185393 DEBUG oslo_concurrency.processutils [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:43:53 compute-0 nova_compute[185389]: 2026-01-26 16:43:53.465 185393 DEBUG nova.virt.libvirt.driver [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:43:53 compute-0 nova_compute[185389]: 2026-01-26 16:43:53.465 185393 DEBUG nova.virt.libvirt.driver [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Ensure instance console log exists: /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:43:53 compute-0 nova_compute[185389]: 2026-01-26 16:43:53.466 185393 DEBUG oslo_concurrency.lockutils [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:43:53 compute-0 nova_compute[185389]: 2026-01-26 16:43:53.466 185393 DEBUG oslo_concurrency.lockutils [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:43:53 compute-0 nova_compute[185389]: 2026-01-26 16:43:53.467 185393 DEBUG oslo_concurrency.lockutils [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:43:54 compute-0 nova_compute[185389]: 2026-01-26 16:43:54.524 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:43:55 compute-0 nova_compute[185389]: 2026-01-26 16:43:55.913 185393 DEBUG nova.network.neutron [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 4243720a-45ff-439a-9753-a7da419082b2] Updating instance_info_cache with network_info: [{"id": "7839c0a2-ac0b-4c45-8c81-670b0c2a638c", "address": "fa:16:3e:ed:33:15", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.104", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7839c0a2-ac", "ovs_interfaceid": "7839c0a2-ac0b-4c45-8c81-670b0c2a638c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:43:55 compute-0 nova_compute[185389]: 2026-01-26 16:43:55.937 185393 DEBUG oslo_concurrency.lockutils [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Releasing lock "refresh_cache-4243720a-45ff-439a-9753-a7da419082b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:43:55 compute-0 nova_compute[185389]: 2026-01-26 16:43:55.937 185393 DEBUG nova.compute.manager [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 4243720a-45ff-439a-9753-a7da419082b2] Instance network_info: |[{"id": "7839c0a2-ac0b-4c45-8c81-670b0c2a638c", "address": "fa:16:3e:ed:33:15", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.104", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7839c0a2-ac", "ovs_interfaceid": "7839c0a2-ac0b-4c45-8c81-670b0c2a638c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:43:55 compute-0 nova_compute[185389]: 2026-01-26 16:43:55.938 185393 DEBUG oslo_concurrency.lockutils [req-d260222f-c943-4e8c-824b-d3db5ae18b1d req-9f0235f1-01f7-44ea-a6bc-fdf4c2c09219 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquired lock "refresh_cache-4243720a-45ff-439a-9753-a7da419082b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:43:55 compute-0 nova_compute[185389]: 2026-01-26 16:43:55.938 185393 DEBUG nova.network.neutron [req-d260222f-c943-4e8c-824b-d3db5ae18b1d req-9f0235f1-01f7-44ea-a6bc-fdf4c2c09219 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 4243720a-45ff-439a-9753-a7da419082b2] Refreshing network info cache for port 7839c0a2-ac0b-4c45-8c81-670b0c2a638c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:43:55 compute-0 nova_compute[185389]: 2026-01-26 16:43:55.945 185393 DEBUG nova.virt.libvirt.driver [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 4243720a-45ff-439a-9753-a7da419082b2] Start _get_guest_xml network_info=[{"id": "7839c0a2-ac0b-4c45-8c81-670b0c2a638c", "address": "fa:16:3e:ed:33:15", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.104", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7839c0a2-ac", "ovs_interfaceid": "7839c0a2-ac0b-4c45-8c81-670b0c2a638c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-01-26T16:35:52Z,direct_url=<?>,disk_format='qcow2',id=718285d9-0264-40f4-9fb3-d2faff180284,min_disk=0,min_ram=0,name='cirros',owner='aa8f1f3bbce34237a208c8e92ca9286f',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-01-26T16:35:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'boot_index': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '718285d9-0264-40f4-9fb3-d2faff180284'}], 'ephemerals': [{'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'device_name': '/dev/vdb', 'disk_bus': 'virtio', 'size': 1, 'encryption_secret_uuid': None, 'encryption_format': None, 'encrypted': False}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:43:55 compute-0 nova_compute[185389]: 2026-01-26 16:43:55.958 185393 WARNING nova.virt.libvirt.driver [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:43:55 compute-0 nova_compute[185389]: 2026-01-26 16:43:55.968 185393 DEBUG nova.virt.libvirt.host [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:43:55 compute-0 nova_compute[185389]: 2026-01-26 16:43:55.969 185393 DEBUG nova.virt.libvirt.host [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:43:55 compute-0 nova_compute[185389]: 2026-01-26 16:43:55.975 185393 DEBUG nova.virt.libvirt.host [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:43:55 compute-0 nova_compute[185389]: 2026-01-26 16:43:55.976 185393 DEBUG nova.virt.libvirt.host [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:43:55 compute-0 nova_compute[185389]: 2026-01-26 16:43:55.976 185393 DEBUG nova.virt.libvirt.driver [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:43:55 compute-0 nova_compute[185389]: 2026-01-26 16:43:55.977 185393 DEBUG nova.virt.hardware [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T16:35:58Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='c2a8df4d-a1d7-42a3-8279-8c7de8a1a662',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-01-26T16:35:52Z,direct_url=<?>,disk_format='qcow2',id=718285d9-0264-40f4-9fb3-d2faff180284,min_disk=0,min_ram=0,name='cirros',owner='aa8f1f3bbce34237a208c8e92ca9286f',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-01-26T16:35:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:43:55 compute-0 nova_compute[185389]: 2026-01-26 16:43:55.977 185393 DEBUG nova.virt.hardware [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:43:55 compute-0 nova_compute[185389]: 2026-01-26 16:43:55.977 185393 DEBUG nova.virt.hardware [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:43:55 compute-0 nova_compute[185389]: 2026-01-26 16:43:55.978 185393 DEBUG nova.virt.hardware [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:43:55 compute-0 nova_compute[185389]: 2026-01-26 16:43:55.978 185393 DEBUG nova.virt.hardware [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:43:55 compute-0 nova_compute[185389]: 2026-01-26 16:43:55.978 185393 DEBUG nova.virt.hardware [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:43:55 compute-0 nova_compute[185389]: 2026-01-26 16:43:55.978 185393 DEBUG nova.virt.hardware [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:43:55 compute-0 nova_compute[185389]: 2026-01-26 16:43:55.978 185393 DEBUG nova.virt.hardware [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:43:55 compute-0 nova_compute[185389]: 2026-01-26 16:43:55.978 185393 DEBUG nova.virt.hardware [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:43:55 compute-0 nova_compute[185389]: 2026-01-26 16:43:55.979 185393 DEBUG nova.virt.hardware [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:43:55 compute-0 nova_compute[185389]: 2026-01-26 16:43:55.979 185393 DEBUG nova.virt.hardware [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:43:55 compute-0 nova_compute[185389]: 2026-01-26 16:43:55.983 185393 DEBUG nova.virt.libvirt.vif [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:43:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-vo2qfhx-pfog52jhyvfv-755q4k2a5zn5-vnf-p7j2ngljbbm6',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-vo2qfhx-pfog52jhyvfv-755q4k2a5zn5-vnf-p7j2ngljbbm6',id=3,image_ref='718285d9-0264-40f4-9fb3-d2faff180284',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='06b33269-d1c6-4fb9-a44b-be304982a550'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='aa8f1f3bbce34237a208c8e92ca9286f',ramdisk_id='',reservation_id='r-dy056y89',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,member,reader',image_base_image_ref='718285d9-0264-40f4-9fb3-d2faff180284',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha2
56='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:43:49Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0wODcxODI0OTI5OTQwMzY5MDcwPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTA4NzE4MjQ5Mjk5NDAzNjkwNzA9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MDg3MTgyNDkyOTk0MDM2OTA3MD09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTA4NzE4MjQ5Mjk5NDAzNjkwNzA9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uO
iBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvb
GliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0wODcxODI0OTI5OTQwMzY5MDcwPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0wODcxODI0OTI5OTQwMzY5MDcwPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob
2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJnc
Jan 26 16:43:55 compute-0 nova_compute[185389]: ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MDg3MTgyNDkyOTk0MDM2OTA3MD09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1Uc
mFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTA4NzE4MjQ5Mjk5NDAzNjkwNzA9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0wODcxODI0OTI5OTQwMzY5MDcwPT0tLQo=',user_id='3c0ab9326d69400aa6a4a91432885d7f',uuid=4243720a-45ff-439a-9753-a7da419082b2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7839c0a2-ac0b-4c45-8c81-670b0c2a638c", "address": "fa:16:3e:ed:33:15", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.104", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7839c0a2-ac", "ovs_interfaceid": "7839c0a2-ac0b-4c45-8c81-670b0c2a638c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:43:55 compute-0 nova_compute[185389]: 2026-01-26 16:43:55.983 185393 DEBUG nova.network.os_vif_util [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Converting VIF {"id": "7839c0a2-ac0b-4c45-8c81-670b0c2a638c", "address": "fa:16:3e:ed:33:15", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.104", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7839c0a2-ac", "ovs_interfaceid": "7839c0a2-ac0b-4c45-8c81-670b0c2a638c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:43:55 compute-0 nova_compute[185389]: 2026-01-26 16:43:55.984 185393 DEBUG nova.network.os_vif_util [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ed:33:15,bridge_name='br-int',has_traffic_filtering=True,id=7839c0a2-ac0b-4c45-8c81-670b0c2a638c,network=Network(74318d1e-b1d8-47d5-8ac3-218d758610fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7839c0a2-ac') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:43:55 compute-0 nova_compute[185389]: 2026-01-26 16:43:55.985 185393 DEBUG nova.objects.instance [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lazy-loading 'pci_devices' on Instance uuid 4243720a-45ff-439a-9753-a7da419082b2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:43:55 compute-0 nova_compute[185389]: 2026-01-26 16:43:55.997 185393 DEBUG nova.virt.libvirt.driver [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 4243720a-45ff-439a-9753-a7da419082b2] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:43:55 compute-0 nova_compute[185389]:   <uuid>4243720a-45ff-439a-9753-a7da419082b2</uuid>
Jan 26 16:43:55 compute-0 nova_compute[185389]:   <name>instance-00000003</name>
Jan 26 16:43:55 compute-0 nova_compute[185389]:   <memory>524288</memory>
Jan 26 16:43:55 compute-0 nova_compute[185389]:   <vcpu>1</vcpu>
Jan 26 16:43:55 compute-0 nova_compute[185389]:   <metadata>
Jan 26 16:43:55 compute-0 nova_compute[185389]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:43:55 compute-0 nova_compute[185389]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:43:55 compute-0 nova_compute[185389]:       <nova:name>vn-vo2qfhx-pfog52jhyvfv-755q4k2a5zn5-vnf-p7j2ngljbbm6</nova:name>
Jan 26 16:43:55 compute-0 nova_compute[185389]:       <nova:creationTime>2026-01-26 16:43:55</nova:creationTime>
Jan 26 16:43:55 compute-0 nova_compute[185389]:       <nova:flavor name="m1.small">
Jan 26 16:43:55 compute-0 nova_compute[185389]:         <nova:memory>512</nova:memory>
Jan 26 16:43:55 compute-0 nova_compute[185389]:         <nova:disk>1</nova:disk>
Jan 26 16:43:55 compute-0 nova_compute[185389]:         <nova:swap>0</nova:swap>
Jan 26 16:43:55 compute-0 nova_compute[185389]:         <nova:ephemeral>1</nova:ephemeral>
Jan 26 16:43:55 compute-0 nova_compute[185389]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:43:55 compute-0 nova_compute[185389]:       </nova:flavor>
Jan 26 16:43:55 compute-0 nova_compute[185389]:       <nova:owner>
Jan 26 16:43:55 compute-0 nova_compute[185389]:         <nova:user uuid="3c0ab9326d69400aa6a4a91432885d7f">admin</nova:user>
Jan 26 16:43:55 compute-0 nova_compute[185389]:         <nova:project uuid="aa8f1f3bbce34237a208c8e92ca9286f">admin</nova:project>
Jan 26 16:43:55 compute-0 nova_compute[185389]:       </nova:owner>
Jan 26 16:43:55 compute-0 nova_compute[185389]:       <nova:root type="image" uuid="718285d9-0264-40f4-9fb3-d2faff180284"/>
Jan 26 16:43:55 compute-0 nova_compute[185389]:       <nova:ports>
Jan 26 16:43:55 compute-0 nova_compute[185389]:         <nova:port uuid="7839c0a2-ac0b-4c45-8c81-670b0c2a638c">
Jan 26 16:43:55 compute-0 nova_compute[185389]:           <nova:ip type="fixed" address="192.168.0.104" ipVersion="4"/>
Jan 26 16:43:55 compute-0 nova_compute[185389]:         </nova:port>
Jan 26 16:43:55 compute-0 nova_compute[185389]:       </nova:ports>
Jan 26 16:43:55 compute-0 nova_compute[185389]:     </nova:instance>
Jan 26 16:43:55 compute-0 nova_compute[185389]:   </metadata>
Jan 26 16:43:55 compute-0 nova_compute[185389]:   <sysinfo type="smbios">
Jan 26 16:43:55 compute-0 nova_compute[185389]:     <system>
Jan 26 16:43:55 compute-0 nova_compute[185389]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:43:55 compute-0 nova_compute[185389]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:43:55 compute-0 nova_compute[185389]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:43:55 compute-0 nova_compute[185389]:       <entry name="serial">4243720a-45ff-439a-9753-a7da419082b2</entry>
Jan 26 16:43:55 compute-0 nova_compute[185389]:       <entry name="uuid">4243720a-45ff-439a-9753-a7da419082b2</entry>
Jan 26 16:43:55 compute-0 nova_compute[185389]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:43:55 compute-0 nova_compute[185389]:     </system>
Jan 26 16:43:55 compute-0 nova_compute[185389]:   </sysinfo>
Jan 26 16:43:55 compute-0 nova_compute[185389]:   <os>
Jan 26 16:43:55 compute-0 nova_compute[185389]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:43:55 compute-0 nova_compute[185389]:     <boot dev="hd"/>
Jan 26 16:43:55 compute-0 nova_compute[185389]:     <smbios mode="sysinfo"/>
Jan 26 16:43:55 compute-0 nova_compute[185389]:   </os>
Jan 26 16:43:55 compute-0 nova_compute[185389]:   <features>
Jan 26 16:43:55 compute-0 nova_compute[185389]:     <acpi/>
Jan 26 16:43:55 compute-0 nova_compute[185389]:     <apic/>
Jan 26 16:43:55 compute-0 nova_compute[185389]:     <vmcoreinfo/>
Jan 26 16:43:55 compute-0 nova_compute[185389]:   </features>
Jan 26 16:43:55 compute-0 nova_compute[185389]:   <clock offset="utc">
Jan 26 16:43:55 compute-0 nova_compute[185389]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:43:55 compute-0 nova_compute[185389]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:43:55 compute-0 nova_compute[185389]:     <timer name="hpet" present="no"/>
Jan 26 16:43:55 compute-0 nova_compute[185389]:   </clock>
Jan 26 16:43:55 compute-0 nova_compute[185389]:   <cpu mode="host-model" match="exact">
Jan 26 16:43:55 compute-0 nova_compute[185389]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:43:55 compute-0 nova_compute[185389]:   </cpu>
Jan 26 16:43:55 compute-0 nova_compute[185389]:   <devices>
Jan 26 16:43:55 compute-0 nova_compute[185389]:     <disk type="file" device="disk">
Jan 26 16:43:55 compute-0 nova_compute[185389]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 26 16:43:55 compute-0 nova_compute[185389]:       <source file="/var/lib/nova/instances/4243720a-45ff-439a-9753-a7da419082b2/disk"/>
Jan 26 16:43:55 compute-0 nova_compute[185389]:       <target dev="vda" bus="virtio"/>
Jan 26 16:43:55 compute-0 nova_compute[185389]:     </disk>
Jan 26 16:43:55 compute-0 nova_compute[185389]:     <disk type="file" device="disk">
Jan 26 16:43:55 compute-0 nova_compute[185389]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 26 16:43:55 compute-0 nova_compute[185389]:       <source file="/var/lib/nova/instances/4243720a-45ff-439a-9753-a7da419082b2/disk.eph0"/>
Jan 26 16:43:55 compute-0 nova_compute[185389]:       <target dev="vdb" bus="virtio"/>
Jan 26 16:43:55 compute-0 nova_compute[185389]:     </disk>
Jan 26 16:43:55 compute-0 nova_compute[185389]:     <disk type="file" device="cdrom">
Jan 26 16:43:55 compute-0 nova_compute[185389]:       <driver name="qemu" type="raw" cache="none"/>
Jan 26 16:43:55 compute-0 nova_compute[185389]:       <source file="/var/lib/nova/instances/4243720a-45ff-439a-9753-a7da419082b2/disk.config"/>
Jan 26 16:43:55 compute-0 nova_compute[185389]:       <target dev="sda" bus="sata"/>
Jan 26 16:43:55 compute-0 nova_compute[185389]:     </disk>
Jan 26 16:43:55 compute-0 nova_compute[185389]:     <interface type="ethernet">
Jan 26 16:43:55 compute-0 nova_compute[185389]:       <mac address="fa:16:3e:ed:33:15"/>
Jan 26 16:43:55 compute-0 nova_compute[185389]:       <model type="virtio"/>
Jan 26 16:43:55 compute-0 nova_compute[185389]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:43:55 compute-0 nova_compute[185389]:       <mtu size="1442"/>
Jan 26 16:43:55 compute-0 nova_compute[185389]:       <target dev="tap7839c0a2-ac"/>
Jan 26 16:43:55 compute-0 nova_compute[185389]:     </interface>
Jan 26 16:43:55 compute-0 nova_compute[185389]:     <serial type="pty">
Jan 26 16:43:55 compute-0 nova_compute[185389]:       <log file="/var/lib/nova/instances/4243720a-45ff-439a-9753-a7da419082b2/console.log" append="off"/>
Jan 26 16:43:55 compute-0 nova_compute[185389]:     </serial>
Jan 26 16:43:55 compute-0 nova_compute[185389]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:43:55 compute-0 nova_compute[185389]:     <video>
Jan 26 16:43:55 compute-0 nova_compute[185389]:       <model type="virtio"/>
Jan 26 16:43:55 compute-0 nova_compute[185389]:     </video>
Jan 26 16:43:55 compute-0 nova_compute[185389]:     <input type="tablet" bus="usb"/>
Jan 26 16:43:55 compute-0 nova_compute[185389]:     <rng model="virtio">
Jan 26 16:43:55 compute-0 nova_compute[185389]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:43:55 compute-0 nova_compute[185389]:     </rng>
Jan 26 16:43:55 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:43:55 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:43:55 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:43:55 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:43:55 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:43:55 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:43:55 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:43:55 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:43:55 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:43:55 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:43:55 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:43:55 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:43:55 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:43:55 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:43:55 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:43:55 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:43:55 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:43:55 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:43:55 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:43:55 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:43:55 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:43:55 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:43:55 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:43:55 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:43:55 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:43:55 compute-0 nova_compute[185389]:     <controller type="usb" index="0"/>
Jan 26 16:43:55 compute-0 nova_compute[185389]:     <memballoon model="virtio">
Jan 26 16:43:55 compute-0 nova_compute[185389]:       <stats period="10"/>
Jan 26 16:43:55 compute-0 nova_compute[185389]:     </memballoon>
Jan 26 16:43:55 compute-0 nova_compute[185389]:   </devices>
Jan 26 16:43:55 compute-0 nova_compute[185389]: </domain>
Jan 26 16:43:55 compute-0 nova_compute[185389]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:43:56 compute-0 nova_compute[185389]: 2026-01-26 16:43:55.998 185393 DEBUG nova.compute.manager [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 4243720a-45ff-439a-9753-a7da419082b2] Preparing to wait for external event network-vif-plugged-7839c0a2-ac0b-4c45-8c81-670b0c2a638c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:43:56 compute-0 nova_compute[185389]: 2026-01-26 16:43:55.998 185393 DEBUG oslo_concurrency.lockutils [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Acquiring lock "4243720a-45ff-439a-9753-a7da419082b2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:43:56 compute-0 nova_compute[185389]: 2026-01-26 16:43:55.998 185393 DEBUG oslo_concurrency.lockutils [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "4243720a-45ff-439a-9753-a7da419082b2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:43:56 compute-0 nova_compute[185389]: 2026-01-26 16:43:55.998 185393 DEBUG oslo_concurrency.lockutils [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "4243720a-45ff-439a-9753-a7da419082b2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:43:56 compute-0 nova_compute[185389]: 2026-01-26 16:43:55.999 185393 DEBUG nova.virt.libvirt.vif [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:43:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-vo2qfhx-pfog52jhyvfv-755q4k2a5zn5-vnf-p7j2ngljbbm6',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-vo2qfhx-pfog52jhyvfv-755q4k2a5zn5-vnf-p7j2ngljbbm6',id=3,image_ref='718285d9-0264-40f4-9fb3-d2faff180284',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='06b33269-d1c6-4fb9-a44b-be304982a550'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='aa8f1f3bbce34237a208c8e92ca9286f',ramdisk_id='',reservation_id='r-dy056y89',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,member,reader',image_base_image_ref='718285d9-0264-40f4-9fb3-d2faff180284',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.open
stack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:43:49Z,user_data='Content-Type: multipart/mixed; boundary="===============0871824929940369070=="
MIME-Version: 1.0

--===============0871824929940369070==
Content-Type: text/cloud-config; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="cloud-config"



# Capture all subprocess output into a logfile
# Useful for troubleshooting cloud-init issues
output: {all: '| tee -a /var/log/cloud-init-output.log'}

--===============0871824929940369070==
Content-Type: text/cloud-boothook; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="boothook.sh"

#!/usr/bin/bash

# FIXME(shadower) this is a workaround for cloud-init 0.6.3 present in Ubuntu
# 12.04 LTS:
# https://bugs.launchpad.net/heat/+bug/1257410
#
# The old cloud-init doesn't create the users directly so the commands to do
# this are injected though nova_utils.py.
#
# Once we drop support for 0.6.3, we can safely remove this.


# in case heat-cfntools has been installed from package but no symlinks
# are yet in /opt/aws/bin/
cfn-create-aws-symlinks

# Do not remove - the cloud boothook should always return success
exit 0

--===============0871824929940369070==
Content-Type: text/part-handler; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="part-handler.py"

# part-handler
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import datetime
import errno
import os
import sys


def list_types():
    return ["text/x-cfninitdata"]


def handle_part(data, ctype, filename, payload):
    if ctype == "__begin__":
        try:
            os.makedirs('/var/lib/heat-cfntools', int("700", 8))
        except OSError:
            ex_type, e, tb = sys.exc_info()
            if e.errno != errno.EEXIST:
                raise
        return

    if ctype == "__end__":
        return

    timestamp = datetime.datetime.now()
    with open('/var/log/part-handler.log', 'a') as log:
        log.write('%s filename:%s, ctype:%s\n' % (timestamp, filename, ctype))

    if ctype == 'text/x-cfninitdata':
        with open('/var/lib/heat-cfntools/%s' % filename, 'w') as f:
            f.write(payload)

        # TODO(sdake) hopefully temporary until users move to heat-cfntools-1.3
        with open('/var/lib/cloud/data/%s' % filename, 'w') as f:
            f.write(payload)

--===============0871824929940369070==
Content-Type: text/x-cfninitdata; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="cfn-userdata"


--===============0871824929940369070==
Content-Type: text/x-shellscript; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="loguserdata.py"

#!/usr/bin/env python3
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import datetime
import errno
import logging
import os
import subprocess
import sys


VAR_PATH = '/var/lib/heat-cfntools'
LOG = logging.getLogger('heat-provision')


def init_logging():
    LOG.setLevel(logging.INFO)
    LOG.addHandler(logging.StreamHandler())
    fh = logging.FileHandler("/var/log/heat-provision.log")
    os.chmod(fh.baseFilename, int("600", 8))
    LOG.addHandler(fh)


def call(args):

    class LogStream(object):

        def write(self, data):
            LOG.info(data)

    LOG.info('%s\n', ' '.join(args))  # noqa
    try:
        ls = LogStream()
        p = subprocess.Popen(args, stdout=subprocess.PIPE,
                             stderr=subprocess.PIPE)
        data = p.communicate()
        if data:
            for x in data:
                ls.write(x)
    except OSError:
        ex_type, ex, tb = sys.exc_info()
        if ex.errno == errno.ENOEXEC:
            LOG.error('Userdata empty or not executable: %s', ex)
            return os.EX_OK
        else:
            LOG.error('OS error running userdata: %s', ex)
            return os.EX_OSERR
    except Exception:
        ex_type, ex, tb = sys.exc_info()
        LOG.error('Unknown error running userdata: %s', ex)
        return os.EX_SOFTWARE
    return p.returncode


def main():
    userdata_path = os.path.join(VAR_PATH, 'cfn-userdata')
    os.chmod(userdata_path, int("700", 8))

    LOG.info('Provision began: %s', datetime.datetime.now())
    returncode = call([userdata_path])
    LOG.info('Provision done: %s', datetime.datetime.now())
    if returncode:
        return returncode


if __name__ == '__main__':
    init_logging()

    code = main()
    if code:
        LOG.error('Provision failed with exit code %s', code)
        sys.exit(code)

    provision_log = os.path.join(VAR_PATH, 'provision-finished')
    # touch the file so it is timestamped with when finished
    with open(provision_log, 'a'):
        os.utime(provision_log, None)

--===============0871824929940369070==
Content-Type: text/x-cfninitdata; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="cfn-metadata-server"

https://heat-cfnapi-internal.openstack.svc:8000/v1/
--===============0871824929940369070==
Content-Type: text/x-cfninitdata; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="cfn-boto-cfg"

[Boto]
debug = 0
is_secure = 0
https_validate_certificates = 1
cfn_region_name = heat
cfn_region_endpoint = heat-cfnapi-internal.openstack.svc
--===============0871824929940369070==--
',user_id='3c0ab9326d69400aa6a4a91432885d7f',uuid=4243720a-45ff-439a-9753-a7da419082b2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7839c0a2-ac0b-4c45-8c81-670b0c2a638c", "address": "fa:16:3e:ed:33:15", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.104", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7839c0a2-ac", "ovs_interfaceid": "7839c0a2-ac0b-4c45-8c81-670b0c2a638c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:43:56 compute-0 nova_compute[185389]: 2026-01-26 16:43:55.999 185393 DEBUG nova.network.os_vif_util [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Converting VIF {"id": "7839c0a2-ac0b-4c45-8c81-670b0c2a638c", "address": "fa:16:3e:ed:33:15", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.104", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7839c0a2-ac", "ovs_interfaceid": "7839c0a2-ac0b-4c45-8c81-670b0c2a638c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:43:56 compute-0 nova_compute[185389]: 2026-01-26 16:43:56.000 185393 DEBUG nova.network.os_vif_util [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ed:33:15,bridge_name='br-int',has_traffic_filtering=True,id=7839c0a2-ac0b-4c45-8c81-670b0c2a638c,network=Network(74318d1e-b1d8-47d5-8ac3-218d758610fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7839c0a2-ac') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:43:56 compute-0 nova_compute[185389]: 2026-01-26 16:43:56.000 185393 DEBUG os_vif [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ed:33:15,bridge_name='br-int',has_traffic_filtering=True,id=7839c0a2-ac0b-4c45-8c81-670b0c2a638c,network=Network(74318d1e-b1d8-47d5-8ac3-218d758610fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7839c0a2-ac') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:43:56 compute-0 nova_compute[185389]: 2026-01-26 16:43:56.000 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:43:56 compute-0 nova_compute[185389]: 2026-01-26 16:43:56.001 185393 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:43:56 compute-0 nova_compute[185389]: 2026-01-26 16:43:56.001 185393 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:43:56 compute-0 nova_compute[185389]: 2026-01-26 16:43:56.007 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:43:56 compute-0 nova_compute[185389]: 2026-01-26 16:43:56.007 185393 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7839c0a2-ac, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:43:56 compute-0 nova_compute[185389]: 2026-01-26 16:43:56.008 185393 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7839c0a2-ac, col_values=(('external_ids', {'iface-id': '7839c0a2-ac0b-4c45-8c81-670b0c2a638c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ed:33:15', 'vm-uuid': '4243720a-45ff-439a-9753-a7da419082b2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:43:56 compute-0 nova_compute[185389]: 2026-01-26 16:43:56.009 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:43:56 compute-0 NetworkManager[56253]: <info>  [1769445836.0124] manager: (tap7839c0a2-ac): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/29)
Jan 26 16:43:56 compute-0 nova_compute[185389]: 2026-01-26 16:43:56.014 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:43:56 compute-0 nova_compute[185389]: 2026-01-26 16:43:56.027 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:43:56 compute-0 nova_compute[185389]: 2026-01-26 16:43:56.029 185393 INFO os_vif [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ed:33:15,bridge_name='br-int',has_traffic_filtering=True,id=7839c0a2-ac0b-4c45-8c81-670b0c2a638c,network=Network(74318d1e-b1d8-47d5-8ac3-218d758610fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7839c0a2-ac')
Jan 26 16:43:56 compute-0 nova_compute[185389]: 2026-01-26 16:43:56.095 185393 DEBUG nova.virt.libvirt.driver [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:43:56 compute-0 nova_compute[185389]: 2026-01-26 16:43:56.096 185393 DEBUG nova.virt.libvirt.driver [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:43:56 compute-0 nova_compute[185389]: 2026-01-26 16:43:56.096 185393 DEBUG nova.virt.libvirt.driver [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:43:56 compute-0 nova_compute[185389]: 2026-01-26 16:43:56.096 185393 DEBUG nova.virt.libvirt.driver [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] No VIF found with MAC fa:16:3e:ed:33:15, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:43:56 compute-0 nova_compute[185389]: 2026-01-26 16:43:56.097 185393 INFO nova.virt.libvirt.driver [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 4243720a-45ff-439a-9753-a7da419082b2] Using config drive
Jan 26 16:43:56 compute-0 rsyslogd[235842]: message too long (8192) with configured size 8096, begin of message is: 2026-01-26 16:43:55.983 185393 DEBUG nova.virt.libvirt.vif [None req-0d94e491-5a [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 26 16:43:57 compute-0 nova_compute[185389]: 2026-01-26 16:43:57.427 185393 INFO nova.virt.libvirt.driver [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 4243720a-45ff-439a-9753-a7da419082b2] Creating config drive at /var/lib/nova/instances/4243720a-45ff-439a-9753-a7da419082b2/disk.config
Jan 26 16:43:57 compute-0 nova_compute[185389]: 2026-01-26 16:43:57.434 185393 DEBUG oslo_concurrency.processutils [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4243720a-45ff-439a-9753-a7da419082b2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpy1y7r5we execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:43:57 compute-0 nova_compute[185389]: 2026-01-26 16:43:57.564 185393 DEBUG oslo_concurrency.processutils [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4243720a-45ff-439a-9753-a7da419082b2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpy1y7r5we" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:43:57 compute-0 kernel: tap7839c0a2-ac: entered promiscuous mode
Jan 26 16:43:57 compute-0 NetworkManager[56253]: <info>  [1769445837.6678] manager: (tap7839c0a2-ac): new Tun device (/org/freedesktop/NetworkManager/Devices/30)
Jan 26 16:43:57 compute-0 nova_compute[185389]: 2026-01-26 16:43:57.671 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:43:57 compute-0 ovn_controller[97699]: 2026-01-26T16:43:57Z|00040|binding|INFO|Claiming lport 7839c0a2-ac0b-4c45-8c81-670b0c2a638c for this chassis.
Jan 26 16:43:57 compute-0 ovn_controller[97699]: 2026-01-26T16:43:57Z|00041|binding|INFO|7839c0a2-ac0b-4c45-8c81-670b0c2a638c: Claiming fa:16:3e:ed:33:15 192.168.0.104
Jan 26 16:43:57 compute-0 nova_compute[185389]: 2026-01-26 16:43:57.675 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:43:57 compute-0 ovn_controller[97699]: 2026-01-26T16:43:57Z|00042|binding|INFO|Setting lport 7839c0a2-ac0b-4c45-8c81-670b0c2a638c ovn-installed in OVS
Jan 26 16:43:57 compute-0 nova_compute[185389]: 2026-01-26 16:43:57.694 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:43:57 compute-0 nova_compute[185389]: 2026-01-26 16:43:57.699 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:43:57 compute-0 ovn_controller[97699]: 2026-01-26T16:43:57Z|00043|binding|INFO|Setting lport 7839c0a2-ac0b-4c45-8c81-670b0c2a638c up in Southbound
Jan 26 16:43:57 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:43:57.710 106955 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ed:33:15 192.168.0.104'], port_security=['fa:16:3e:ed:33:15 192.168.0.104'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-2qbervo2qfhx-pfog52jhyvfv-755q4k2a5zn5-port-hcyclvuzjcab', 'neutron:cidrs': '192.168.0.104/24', 'neutron:device_id': '4243720a-45ff-439a-9753-a7da419082b2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74318d1e-b1d8-47d5-8ac3-218d758610fe', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-2qbervo2qfhx-pfog52jhyvfv-755q4k2a5zn5-port-hcyclvuzjcab', 'neutron:project_id': 'aa8f1f3bbce34237a208c8e92ca9286f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c6ae7745-53c4-4846-bf8b-0c9f0303bef3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.212'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1197b65b-eda5-4824-97ab-519748b0b6a7, chassis=[<ovs.db.idl.Row object at 0x7faee18cfdf0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7faee18cfdf0>], logical_port=7839c0a2-ac0b-4c45-8c81-670b0c2a638c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:43:57 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:43:57.712 106955 INFO neutron.agent.ovn.metadata.agent [-] Port 7839c0a2-ac0b-4c45-8c81-670b0c2a638c in datapath 74318d1e-b1d8-47d5-8ac3-218d758610fe bound to our chassis
Jan 26 16:43:57 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:43:57.714 106955 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 74318d1e-b1d8-47d5-8ac3-218d758610fe
Jan 26 16:43:57 compute-0 systemd-machined[156679]: New machine qemu-3-instance-00000003.
Jan 26 16:43:57 compute-0 systemd-udevd[241282]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:43:57 compute-0 systemd[1]: Started Virtual Machine qemu-3-instance-00000003.
Jan 26 16:43:57 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:43:57.748 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[e37c9f1b-1019-4b7e-bde0-9d3a6c014d04]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:43:57 compute-0 NetworkManager[56253]: <info>  [1769445837.7633] device (tap7839c0a2-ac): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:43:57 compute-0 NetworkManager[56253]: <info>  [1769445837.7671] device (tap7839c0a2-ac): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:43:57 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:43:57.806 238787 DEBUG oslo.privsep.daemon [-] privsep: reply[5bf8b5b4-93c2-435a-bb05-b01c76197021]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:43:57 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:43:57.810 238787 DEBUG oslo.privsep.daemon [-] privsep: reply[a2de1a36-3384-472e-b1c6-5b905e0edc75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:43:57 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:43:57.849 238787 DEBUG oslo.privsep.daemon [-] privsep: reply[767432ec-c4db-4b89-a99b-bade2605a6d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:43:57 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:43:57.871 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[502f8104-4136-4ca8-81a7-1134746bfbec]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap74318d1e-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b2:6c:31'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 7, 'tx_packets': 7, 'rx_bytes': 574, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 7, 'tx_packets': 7, 'rx_bytes': 574, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 410415, 'reachable_time': 40653, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 241295, 'error': None, 'target': 'ovnmeta-74318d1e-b1d8-47d5-8ac3-218d758610fe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:43:57 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:43:57.886 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[9153883a-331a-488c-8f80-cd54cfd9c544]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap74318d1e-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 410434, 'tstamp': 410434}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 241296, 'error': None, 'target': 'ovnmeta-74318d1e-b1d8-47d5-8ac3-218d758610fe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap74318d1e-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 410439, 'tstamp': 410439}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 241296, 'error': None, 'target': 'ovnmeta-74318d1e-b1d8-47d5-8ac3-218d758610fe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:43:57 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:43:57.889 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap74318d1e-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:43:57 compute-0 nova_compute[185389]: 2026-01-26 16:43:57.892 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:43:57 compute-0 nova_compute[185389]: 2026-01-26 16:43:57.893 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:43:57 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:43:57.897 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap74318d1e-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:43:57 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:43:57.897 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:43:57 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:43:57.898 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap74318d1e-b0, col_values=(('external_ids', {'iface-id': '6045fbea-609e-4588-93b4-ca6dda4224d1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:43:57 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:43:57.899 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:43:58 compute-0 nova_compute[185389]: 2026-01-26 16:43:58.783 185393 DEBUG nova.virt.driver [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Emitting event <LifecycleEvent: 1769445838.783122, 4243720a-45ff-439a-9753-a7da419082b2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:43:58 compute-0 nova_compute[185389]: 2026-01-26 16:43:58.784 185393 INFO nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: 4243720a-45ff-439a-9753-a7da419082b2] VM Started (Lifecycle Event)
Jan 26 16:43:58 compute-0 nova_compute[185389]: 2026-01-26 16:43:58.812 185393 DEBUG nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: 4243720a-45ff-439a-9753-a7da419082b2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:43:58 compute-0 nova_compute[185389]: 2026-01-26 16:43:58.822 185393 DEBUG nova.virt.driver [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Emitting event <LifecycleEvent: 1769445838.7832954, 4243720a-45ff-439a-9753-a7da419082b2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:43:58 compute-0 nova_compute[185389]: 2026-01-26 16:43:58.822 185393 INFO nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: 4243720a-45ff-439a-9753-a7da419082b2] VM Paused (Lifecycle Event)
Jan 26 16:43:58 compute-0 nova_compute[185389]: 2026-01-26 16:43:58.836 185393 DEBUG nova.network.neutron [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Successfully updated port: 58a644b5-e3a2-4838-9216-8540447cf0a5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:43:58 compute-0 nova_compute[185389]: 2026-01-26 16:43:58.848 185393 DEBUG nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: 4243720a-45ff-439a-9753-a7da419082b2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:43:58 compute-0 nova_compute[185389]: 2026-01-26 16:43:58.850 185393 DEBUG oslo_concurrency.lockutils [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Acquiring lock "refresh_cache-a2578f61-3f19-40f4-a32f-97cf22569550" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:43:58 compute-0 nova_compute[185389]: 2026-01-26 16:43:58.850 185393 DEBUG oslo_concurrency.lockutils [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Acquired lock "refresh_cache-a2578f61-3f19-40f4-a32f-97cf22569550" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:43:58 compute-0 nova_compute[185389]: 2026-01-26 16:43:58.851 185393 DEBUG nova.network.neutron [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:43:58 compute-0 nova_compute[185389]: 2026-01-26 16:43:58.857 185393 DEBUG nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: 4243720a-45ff-439a-9753-a7da419082b2] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:43:58 compute-0 nova_compute[185389]: 2026-01-26 16:43:58.882 185393 INFO nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: 4243720a-45ff-439a-9753-a7da419082b2] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:43:59 compute-0 podman[241304]: 2026-01-26 16:43:59.253751091 +0000 UTC m=+0.121844772 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 26 16:43:59 compute-0 nova_compute[185389]: 2026-01-26 16:43:59.526 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:43:59 compute-0 nova_compute[185389]: 2026-01-26 16:43:59.685 185393 DEBUG nova.network.neutron [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:43:59 compute-0 systemd[1]: Starting libvirt proxy daemon...
Jan 26 16:43:59 compute-0 systemd[1]: Started libvirt proxy daemon.
Jan 26 16:43:59 compute-0 podman[201244]: time="2026-01-26T16:43:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 16:43:59 compute-0 podman[201244]: @ - - [26/Jan/2026:16:43:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 16:43:59 compute-0 podman[201244]: @ - - [26/Jan/2026:16:43:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4368 "" "Go-http-client/1.1"
Jan 26 16:43:59 compute-0 nova_compute[185389]: 2026-01-26 16:43:59.965 185393 DEBUG nova.compute.manager [req-c264681c-83fd-4016-b74a-8890ee59b282 req-85c7075d-92a7-413e-8496-c78aed79f6cb 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 4243720a-45ff-439a-9753-a7da419082b2] Received event network-vif-plugged-7839c0a2-ac0b-4c45-8c81-670b0c2a638c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:43:59 compute-0 nova_compute[185389]: 2026-01-26 16:43:59.966 185393 DEBUG oslo_concurrency.lockutils [req-c264681c-83fd-4016-b74a-8890ee59b282 req-85c7075d-92a7-413e-8496-c78aed79f6cb 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquiring lock "4243720a-45ff-439a-9753-a7da419082b2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:43:59 compute-0 nova_compute[185389]: 2026-01-26 16:43:59.966 185393 DEBUG oslo_concurrency.lockutils [req-c264681c-83fd-4016-b74a-8890ee59b282 req-85c7075d-92a7-413e-8496-c78aed79f6cb 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "4243720a-45ff-439a-9753-a7da419082b2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:43:59 compute-0 nova_compute[185389]: 2026-01-26 16:43:59.967 185393 DEBUG oslo_concurrency.lockutils [req-c264681c-83fd-4016-b74a-8890ee59b282 req-85c7075d-92a7-413e-8496-c78aed79f6cb 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "4243720a-45ff-439a-9753-a7da419082b2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:43:59 compute-0 nova_compute[185389]: 2026-01-26 16:43:59.967 185393 DEBUG nova.compute.manager [req-c264681c-83fd-4016-b74a-8890ee59b282 req-85c7075d-92a7-413e-8496-c78aed79f6cb 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 4243720a-45ff-439a-9753-a7da419082b2] Processing event network-vif-plugged-7839c0a2-ac0b-4c45-8c81-670b0c2a638c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:43:59 compute-0 nova_compute[185389]: 2026-01-26 16:43:59.969 185393 DEBUG nova.compute.manager [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 4243720a-45ff-439a-9753-a7da419082b2] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:43:59 compute-0 nova_compute[185389]: 2026-01-26 16:43:59.981 185393 DEBUG nova.virt.libvirt.driver [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 4243720a-45ff-439a-9753-a7da419082b2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:43:59 compute-0 nova_compute[185389]: 2026-01-26 16:43:59.982 185393 DEBUG nova.virt.driver [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Emitting event <LifecycleEvent: 1769445839.9807591, 4243720a-45ff-439a-9753-a7da419082b2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:43:59 compute-0 nova_compute[185389]: 2026-01-26 16:43:59.983 185393 INFO nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: 4243720a-45ff-439a-9753-a7da419082b2] VM Resumed (Lifecycle Event)
Jan 26 16:43:59 compute-0 nova_compute[185389]: 2026-01-26 16:43:59.995 185393 INFO nova.virt.libvirt.driver [-] [instance: 4243720a-45ff-439a-9753-a7da419082b2] Instance spawned successfully.
Jan 26 16:43:59 compute-0 nova_compute[185389]: 2026-01-26 16:43:59.996 185393 DEBUG nova.virt.libvirt.driver [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 4243720a-45ff-439a-9753-a7da419082b2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:44:00 compute-0 nova_compute[185389]: 2026-01-26 16:44:00.024 185393 DEBUG nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: 4243720a-45ff-439a-9753-a7da419082b2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:44:00 compute-0 nova_compute[185389]: 2026-01-26 16:44:00.035 185393 DEBUG nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: 4243720a-45ff-439a-9753-a7da419082b2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:44:00 compute-0 nova_compute[185389]: 2026-01-26 16:44:00.044 185393 DEBUG nova.virt.libvirt.driver [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 4243720a-45ff-439a-9753-a7da419082b2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:44:00 compute-0 nova_compute[185389]: 2026-01-26 16:44:00.045 185393 DEBUG nova.virt.libvirt.driver [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 4243720a-45ff-439a-9753-a7da419082b2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:44:00 compute-0 nova_compute[185389]: 2026-01-26 16:44:00.046 185393 DEBUG nova.virt.libvirt.driver [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 4243720a-45ff-439a-9753-a7da419082b2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:44:00 compute-0 nova_compute[185389]: 2026-01-26 16:44:00.046 185393 DEBUG nova.virt.libvirt.driver [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 4243720a-45ff-439a-9753-a7da419082b2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:44:00 compute-0 nova_compute[185389]: 2026-01-26 16:44:00.047 185393 DEBUG nova.virt.libvirt.driver [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 4243720a-45ff-439a-9753-a7da419082b2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:44:00 compute-0 nova_compute[185389]: 2026-01-26 16:44:00.048 185393 DEBUG nova.virt.libvirt.driver [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 4243720a-45ff-439a-9753-a7da419082b2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:44:00 compute-0 nova_compute[185389]: 2026-01-26 16:44:00.081 185393 INFO nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: 4243720a-45ff-439a-9753-a7da419082b2] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:44:00 compute-0 nova_compute[185389]: 2026-01-26 16:44:00.118 185393 INFO nova.compute.manager [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 4243720a-45ff-439a-9753-a7da419082b2] Took 10.05 seconds to spawn the instance on the hypervisor.
Jan 26 16:44:00 compute-0 nova_compute[185389]: 2026-01-26 16:44:00.119 185393 DEBUG nova.compute.manager [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 4243720a-45ff-439a-9753-a7da419082b2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:44:00 compute-0 nova_compute[185389]: 2026-01-26 16:44:00.382 185393 INFO nova.compute.manager [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 4243720a-45ff-439a-9753-a7da419082b2] Took 10.87 seconds to build instance.
Jan 26 16:44:00 compute-0 nova_compute[185389]: 2026-01-26 16:44:00.403 185393 DEBUG oslo_concurrency.lockutils [None req-0d94e491-5a14-469f-87f4-3d26f19ba3be 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "4243720a-45ff-439a-9753-a7da419082b2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.981s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:44:00 compute-0 nova_compute[185389]: 2026-01-26 16:44:00.701 185393 DEBUG nova.network.neutron [req-d260222f-c943-4e8c-824b-d3db5ae18b1d req-9f0235f1-01f7-44ea-a6bc-fdf4c2c09219 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 4243720a-45ff-439a-9753-a7da419082b2] Updated VIF entry in instance network info cache for port 7839c0a2-ac0b-4c45-8c81-670b0c2a638c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:44:00 compute-0 nova_compute[185389]: 2026-01-26 16:44:00.702 185393 DEBUG nova.network.neutron [req-d260222f-c943-4e8c-824b-d3db5ae18b1d req-9f0235f1-01f7-44ea-a6bc-fdf4c2c09219 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 4243720a-45ff-439a-9753-a7da419082b2] Updating instance_info_cache with network_info: [{"id": "7839c0a2-ac0b-4c45-8c81-670b0c2a638c", "address": "fa:16:3e:ed:33:15", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.104", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7839c0a2-ac", "ovs_interfaceid": "7839c0a2-ac0b-4c45-8c81-670b0c2a638c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:44:00 compute-0 nova_compute[185389]: 2026-01-26 16:44:00.719 185393 DEBUG oslo_concurrency.lockutils [req-d260222f-c943-4e8c-824b-d3db5ae18b1d req-9f0235f1-01f7-44ea-a6bc-fdf4c2c09219 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Releasing lock "refresh_cache-4243720a-45ff-439a-9753-a7da419082b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:44:01 compute-0 nova_compute[185389]: 2026-01-26 16:44:01.010 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:44:01 compute-0 openstack_network_exporter[204387]: ERROR   16:44:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 16:44:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:44:01 compute-0 openstack_network_exporter[204387]: ERROR   16:44:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 16:44:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:44:01 compute-0 anacron[31011]: Job `cron.weekly' started
Jan 26 16:44:01 compute-0 anacron[31011]: Job `cron.weekly' terminated
Jan 26 16:44:01 compute-0 nova_compute[185389]: 2026-01-26 16:44:01.578 185393 DEBUG nova.network.neutron [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Updating instance_info_cache with network_info: [{"id": "58a644b5-e3a2-4838-9216-8540447cf0a5", "address": "fa:16:3e:ac:8d:a5", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.107", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58a644b5-e3", "ovs_interfaceid": "58a644b5-e3a2-4838-9216-8540447cf0a5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:44:01 compute-0 nova_compute[185389]: 2026-01-26 16:44:01.599 185393 DEBUG oslo_concurrency.lockutils [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Releasing lock "refresh_cache-a2578f61-3f19-40f4-a32f-97cf22569550" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:44:01 compute-0 nova_compute[185389]: 2026-01-26 16:44:01.600 185393 DEBUG nova.compute.manager [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Instance network_info: |[{"id": "58a644b5-e3a2-4838-9216-8540447cf0a5", "address": "fa:16:3e:ac:8d:a5", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.107", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58a644b5-e3", "ovs_interfaceid": "58a644b5-e3a2-4838-9216-8540447cf0a5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:44:01 compute-0 nova_compute[185389]: 2026-01-26 16:44:01.606 185393 DEBUG nova.virt.libvirt.driver [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Start _get_guest_xml network_info=[{"id": "58a644b5-e3a2-4838-9216-8540447cf0a5", "address": "fa:16:3e:ac:8d:a5", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.107", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58a644b5-e3", "ovs_interfaceid": "58a644b5-e3a2-4838-9216-8540447cf0a5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-01-26T16:35:52Z,direct_url=<?>,disk_format='qcow2',id=718285d9-0264-40f4-9fb3-d2faff180284,min_disk=0,min_ram=0,name='cirros',owner='aa8f1f3bbce34237a208c8e92ca9286f',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-01-26T16:35:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'boot_index': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '718285d9-0264-40f4-9fb3-d2faff180284'}], 'ephemerals': [{'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'device_name': '/dev/vdb', 'disk_bus': 'virtio', 'size': 1, 'encryption_secret_uuid': None, 'encryption_format': None, 'encrypted': False}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:44:01 compute-0 nova_compute[185389]: 2026-01-26 16:44:01.617 185393 WARNING nova.virt.libvirt.driver [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:44:01 compute-0 nova_compute[185389]: 2026-01-26 16:44:01.627 185393 DEBUG nova.virt.libvirt.host [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:44:01 compute-0 nova_compute[185389]: 2026-01-26 16:44:01.628 185393 DEBUG nova.virt.libvirt.host [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:44:01 compute-0 nova_compute[185389]: 2026-01-26 16:44:01.635 185393 DEBUG nova.virt.libvirt.host [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:44:01 compute-0 nova_compute[185389]: 2026-01-26 16:44:01.636 185393 DEBUG nova.virt.libvirt.host [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:44:01 compute-0 nova_compute[185389]: 2026-01-26 16:44:01.638 185393 DEBUG nova.virt.libvirt.driver [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:44:01 compute-0 nova_compute[185389]: 2026-01-26 16:44:01.639 185393 DEBUG nova.virt.hardware [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T16:35:58Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='c2a8df4d-a1d7-42a3-8279-8c7de8a1a662',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-01-26T16:35:52Z,direct_url=<?>,disk_format='qcow2',id=718285d9-0264-40f4-9fb3-d2faff180284,min_disk=0,min_ram=0,name='cirros',owner='aa8f1f3bbce34237a208c8e92ca9286f',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-01-26T16:35:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:44:01 compute-0 nova_compute[185389]: 2026-01-26 16:44:01.640 185393 DEBUG nova.virt.hardware [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:44:01 compute-0 nova_compute[185389]: 2026-01-26 16:44:01.641 185393 DEBUG nova.virt.hardware [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:44:01 compute-0 nova_compute[185389]: 2026-01-26 16:44:01.642 185393 DEBUG nova.virt.hardware [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:44:01 compute-0 nova_compute[185389]: 2026-01-26 16:44:01.643 185393 DEBUG nova.virt.hardware [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:44:01 compute-0 nova_compute[185389]: 2026-01-26 16:44:01.644 185393 DEBUG nova.virt.hardware [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:44:01 compute-0 nova_compute[185389]: 2026-01-26 16:44:01.646 185393 DEBUG nova.virt.hardware [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:44:01 compute-0 nova_compute[185389]: 2026-01-26 16:44:01.647 185393 DEBUG nova.virt.hardware [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:44:01 compute-0 nova_compute[185389]: 2026-01-26 16:44:01.648 185393 DEBUG nova.virt.hardware [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:44:01 compute-0 nova_compute[185389]: 2026-01-26 16:44:01.649 185393 DEBUG nova.virt.hardware [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:44:01 compute-0 nova_compute[185389]: 2026-01-26 16:44:01.650 185393 DEBUG nova.virt.hardware [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:44:01 compute-0 nova_compute[185389]: 2026-01-26 16:44:01.656 185393 DEBUG nova.virt.libvirt.vif [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:43:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-vo2qfhx-3o3bmw4mfsy3-eo23jljqgyv6-vnf-pi7veetjym6y',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-vo2qfhx-3o3bmw4mfsy3-eo23jljqgyv6-vnf-pi7veetjym6y',id=4,image_ref='718285d9-0264-40f4-9fb3-d2faff180284',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='06b33269-d1c6-4fb9-a44b-be304982a550'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='aa8f1f3bbce34237a208c8e92ca9286f',ramdisk_id='',reservation_id='r-yv3lpxwt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,member,reader',image_base_image_ref='718285d9-0264-40f4-9fb3-d2faff180284',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha2
56='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:43:52Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0wMDY4MDM5Njk4MTE1NTgwMDM5PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTAwNjgwMzk2OTgxMTU1ODAwMzk9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MDA2ODAzOTY5ODExNTU4MDAzOT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTAwNjgwMzk2OTgxMTU1ODAwMzk9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uO
iBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvb
GliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0wMDY4MDM5Njk4MTE1NTgwMDM5PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0wMDY4MDM5Njk4MTE1NTgwMDM5PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob
2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJnc
Jan 26 16:44:01 compute-0 nova_compute[185389]: ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MDA2ODAzOTY5ODExNTU4MDAzOT09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1Uc
mFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTAwNjgwMzk2OTgxMTU1ODAwMzk9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0wMDY4MDM5Njk4MTE1NTgwMDM5PT0tLQo=',user_id='3c0ab9326d69400aa6a4a91432885d7f',uuid=a2578f61-3f19-40f4-a32f-97cf22569550,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "58a644b5-e3a2-4838-9216-8540447cf0a5", "address": "fa:16:3e:ac:8d:a5", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.107", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58a644b5-e3", "ovs_interfaceid": "58a644b5-e3a2-4838-9216-8540447cf0a5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:44:01 compute-0 nova_compute[185389]: 2026-01-26 16:44:01.658 185393 DEBUG nova.network.os_vif_util [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Converting VIF {"id": "58a644b5-e3a2-4838-9216-8540447cf0a5", "address": "fa:16:3e:ac:8d:a5", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.107", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58a644b5-e3", "ovs_interfaceid": "58a644b5-e3a2-4838-9216-8540447cf0a5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:44:01 compute-0 nova_compute[185389]: 2026-01-26 16:44:01.660 185393 DEBUG nova.network.os_vif_util [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ac:8d:a5,bridge_name='br-int',has_traffic_filtering=True,id=58a644b5-e3a2-4838-9216-8540447cf0a5,network=Network(74318d1e-b1d8-47d5-8ac3-218d758610fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap58a644b5-e3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:44:01 compute-0 nova_compute[185389]: 2026-01-26 16:44:01.662 185393 DEBUG nova.objects.instance [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lazy-loading 'pci_devices' on Instance uuid a2578f61-3f19-40f4-a32f-97cf22569550 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:44:01 compute-0 nova_compute[185389]: 2026-01-26 16:44:01.677 185393 DEBUG nova.virt.libvirt.driver [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:44:01 compute-0 nova_compute[185389]:   <uuid>a2578f61-3f19-40f4-a32f-97cf22569550</uuid>
Jan 26 16:44:01 compute-0 nova_compute[185389]:   <name>instance-00000004</name>
Jan 26 16:44:01 compute-0 nova_compute[185389]:   <memory>524288</memory>
Jan 26 16:44:01 compute-0 nova_compute[185389]:   <vcpu>1</vcpu>
Jan 26 16:44:01 compute-0 nova_compute[185389]:   <metadata>
Jan 26 16:44:01 compute-0 nova_compute[185389]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:44:01 compute-0 nova_compute[185389]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:44:01 compute-0 nova_compute[185389]:       <nova:name>vn-vo2qfhx-3o3bmw4mfsy3-eo23jljqgyv6-vnf-pi7veetjym6y</nova:name>
Jan 26 16:44:01 compute-0 nova_compute[185389]:       <nova:creationTime>2026-01-26 16:44:01</nova:creationTime>
Jan 26 16:44:01 compute-0 nova_compute[185389]:       <nova:flavor name="m1.small">
Jan 26 16:44:01 compute-0 nova_compute[185389]:         <nova:memory>512</nova:memory>
Jan 26 16:44:01 compute-0 nova_compute[185389]:         <nova:disk>1</nova:disk>
Jan 26 16:44:01 compute-0 nova_compute[185389]:         <nova:swap>0</nova:swap>
Jan 26 16:44:01 compute-0 nova_compute[185389]:         <nova:ephemeral>1</nova:ephemeral>
Jan 26 16:44:01 compute-0 nova_compute[185389]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:44:01 compute-0 nova_compute[185389]:       </nova:flavor>
Jan 26 16:44:01 compute-0 nova_compute[185389]:       <nova:owner>
Jan 26 16:44:01 compute-0 nova_compute[185389]:         <nova:user uuid="3c0ab9326d69400aa6a4a91432885d7f">admin</nova:user>
Jan 26 16:44:01 compute-0 nova_compute[185389]:         <nova:project uuid="aa8f1f3bbce34237a208c8e92ca9286f">admin</nova:project>
Jan 26 16:44:01 compute-0 nova_compute[185389]:       </nova:owner>
Jan 26 16:44:01 compute-0 nova_compute[185389]:       <nova:root type="image" uuid="718285d9-0264-40f4-9fb3-d2faff180284"/>
Jan 26 16:44:01 compute-0 nova_compute[185389]:       <nova:ports>
Jan 26 16:44:01 compute-0 nova_compute[185389]:         <nova:port uuid="58a644b5-e3a2-4838-9216-8540447cf0a5">
Jan 26 16:44:01 compute-0 nova_compute[185389]:           <nova:ip type="fixed" address="192.168.0.107" ipVersion="4"/>
Jan 26 16:44:01 compute-0 nova_compute[185389]:         </nova:port>
Jan 26 16:44:01 compute-0 nova_compute[185389]:       </nova:ports>
Jan 26 16:44:01 compute-0 nova_compute[185389]:     </nova:instance>
Jan 26 16:44:01 compute-0 nova_compute[185389]:   </metadata>
Jan 26 16:44:01 compute-0 nova_compute[185389]:   <sysinfo type="smbios">
Jan 26 16:44:01 compute-0 nova_compute[185389]:     <system>
Jan 26 16:44:01 compute-0 nova_compute[185389]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:44:01 compute-0 nova_compute[185389]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:44:01 compute-0 nova_compute[185389]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:44:01 compute-0 nova_compute[185389]:       <entry name="serial">a2578f61-3f19-40f4-a32f-97cf22569550</entry>
Jan 26 16:44:01 compute-0 nova_compute[185389]:       <entry name="uuid">a2578f61-3f19-40f4-a32f-97cf22569550</entry>
Jan 26 16:44:01 compute-0 nova_compute[185389]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:44:01 compute-0 nova_compute[185389]:     </system>
Jan 26 16:44:01 compute-0 nova_compute[185389]:   </sysinfo>
Jan 26 16:44:01 compute-0 nova_compute[185389]:   <os>
Jan 26 16:44:01 compute-0 nova_compute[185389]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:44:01 compute-0 nova_compute[185389]:     <boot dev="hd"/>
Jan 26 16:44:01 compute-0 nova_compute[185389]:     <smbios mode="sysinfo"/>
Jan 26 16:44:01 compute-0 nova_compute[185389]:   </os>
Jan 26 16:44:01 compute-0 nova_compute[185389]:   <features>
Jan 26 16:44:01 compute-0 nova_compute[185389]:     <acpi/>
Jan 26 16:44:01 compute-0 nova_compute[185389]:     <apic/>
Jan 26 16:44:01 compute-0 nova_compute[185389]:     <vmcoreinfo/>
Jan 26 16:44:01 compute-0 nova_compute[185389]:   </features>
Jan 26 16:44:01 compute-0 nova_compute[185389]:   <clock offset="utc">
Jan 26 16:44:01 compute-0 nova_compute[185389]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:44:01 compute-0 nova_compute[185389]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:44:01 compute-0 nova_compute[185389]:     <timer name="hpet" present="no"/>
Jan 26 16:44:01 compute-0 nova_compute[185389]:   </clock>
Jan 26 16:44:01 compute-0 nova_compute[185389]:   <cpu mode="host-model" match="exact">
Jan 26 16:44:01 compute-0 nova_compute[185389]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:44:01 compute-0 nova_compute[185389]:   </cpu>
Jan 26 16:44:01 compute-0 nova_compute[185389]:   <devices>
Jan 26 16:44:01 compute-0 nova_compute[185389]:     <disk type="file" device="disk">
Jan 26 16:44:01 compute-0 nova_compute[185389]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 26 16:44:01 compute-0 nova_compute[185389]:       <source file="/var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk"/>
Jan 26 16:44:01 compute-0 nova_compute[185389]:       <target dev="vda" bus="virtio"/>
Jan 26 16:44:01 compute-0 nova_compute[185389]:     </disk>
Jan 26 16:44:01 compute-0 nova_compute[185389]:     <disk type="file" device="disk">
Jan 26 16:44:01 compute-0 nova_compute[185389]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 26 16:44:01 compute-0 nova_compute[185389]:       <source file="/var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0"/>
Jan 26 16:44:01 compute-0 nova_compute[185389]:       <target dev="vdb" bus="virtio"/>
Jan 26 16:44:01 compute-0 nova_compute[185389]:     </disk>
Jan 26 16:44:01 compute-0 nova_compute[185389]:     <disk type="file" device="cdrom">
Jan 26 16:44:01 compute-0 nova_compute[185389]:       <driver name="qemu" type="raw" cache="none"/>
Jan 26 16:44:01 compute-0 nova_compute[185389]:       <source file="/var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.config"/>
Jan 26 16:44:01 compute-0 nova_compute[185389]:       <target dev="sda" bus="sata"/>
Jan 26 16:44:01 compute-0 nova_compute[185389]:     </disk>
Jan 26 16:44:01 compute-0 nova_compute[185389]:     <interface type="ethernet">
Jan 26 16:44:01 compute-0 nova_compute[185389]:       <mac address="fa:16:3e:ac:8d:a5"/>
Jan 26 16:44:01 compute-0 nova_compute[185389]:       <model type="virtio"/>
Jan 26 16:44:01 compute-0 nova_compute[185389]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:44:01 compute-0 nova_compute[185389]:       <mtu size="1442"/>
Jan 26 16:44:01 compute-0 nova_compute[185389]:       <target dev="tap58a644b5-e3"/>
Jan 26 16:44:01 compute-0 nova_compute[185389]:     </interface>
Jan 26 16:44:01 compute-0 nova_compute[185389]:     <serial type="pty">
Jan 26 16:44:01 compute-0 nova_compute[185389]:       <log file="/var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/console.log" append="off"/>
Jan 26 16:44:01 compute-0 nova_compute[185389]:     </serial>
Jan 26 16:44:01 compute-0 nova_compute[185389]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:44:01 compute-0 nova_compute[185389]:     <video>
Jan 26 16:44:01 compute-0 nova_compute[185389]:       <model type="virtio"/>
Jan 26 16:44:01 compute-0 nova_compute[185389]:     </video>
Jan 26 16:44:01 compute-0 nova_compute[185389]:     <input type="tablet" bus="usb"/>
Jan 26 16:44:01 compute-0 nova_compute[185389]:     <rng model="virtio">
Jan 26 16:44:01 compute-0 nova_compute[185389]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:44:01 compute-0 nova_compute[185389]:     </rng>
Jan 26 16:44:01 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:44:01 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:44:01 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:44:01 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:44:01 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:44:01 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:44:01 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:44:01 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:44:01 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:44:01 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:44:01 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:44:01 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:44:01 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:44:01 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:44:01 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:44:01 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:44:01 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:44:01 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:44:01 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:44:01 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:44:01 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:44:01 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:44:01 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:44:01 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:44:01 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:44:01 compute-0 nova_compute[185389]:     <controller type="usb" index="0"/>
Jan 26 16:44:01 compute-0 nova_compute[185389]:     <memballoon model="virtio">
Jan 26 16:44:01 compute-0 nova_compute[185389]:       <stats period="10"/>
Jan 26 16:44:01 compute-0 nova_compute[185389]:     </memballoon>
Jan 26 16:44:01 compute-0 nova_compute[185389]:   </devices>
Jan 26 16:44:01 compute-0 nova_compute[185389]: </domain>
Jan 26 16:44:01 compute-0 nova_compute[185389]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:44:01 compute-0 rsyslogd[235842]: message too long (8192) with configured size 8096, begin of message is: 2026-01-26 16:44:01.656 185393 DEBUG nova.virt.libvirt.vif [None req-ebe6b674-cf [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 26 16:44:01 compute-0 nova_compute[185389]: 2026-01-26 16:44:01.691 185393 DEBUG nova.compute.manager [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Preparing to wait for external event network-vif-plugged-58a644b5-e3a2-4838-9216-8540447cf0a5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:44:01 compute-0 nova_compute[185389]: 2026-01-26 16:44:01.691 185393 DEBUG oslo_concurrency.lockutils [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Acquiring lock "a2578f61-3f19-40f4-a32f-97cf22569550-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:44:01 compute-0 nova_compute[185389]: 2026-01-26 16:44:01.691 185393 DEBUG oslo_concurrency.lockutils [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "a2578f61-3f19-40f4-a32f-97cf22569550-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:44:01 compute-0 nova_compute[185389]: 2026-01-26 16:44:01.692 185393 DEBUG oslo_concurrency.lockutils [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "a2578f61-3f19-40f4-a32f-97cf22569550-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:44:01 compute-0 nova_compute[185389]: 2026-01-26 16:44:01.693 185393 DEBUG nova.virt.libvirt.vif [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:43:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-vo2qfhx-3o3bmw4mfsy3-eo23jljqgyv6-vnf-pi7veetjym6y',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-vo2qfhx-3o3bmw4mfsy3-eo23jljqgyv6-vnf-pi7veetjym6y',id=4,image_ref='718285d9-0264-40f4-9fb3-d2faff180284',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='06b33269-d1c6-4fb9-a44b-be304982a550'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='aa8f1f3bbce34237a208c8e92ca9286f',ramdisk_id='',reservation_id='r-yv3lpxwt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,member,reader',image_base_image_ref='718285d9-0264-40f4-9fb3-d2faff180284',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.open
stack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:43:52Z,user_data='Content-Type: multipart/mixed; boundary="===============0068039698115580039=="
MIME-Version: 1.0

--===============0068039698115580039==
Content-Type: text/cloud-config; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="cloud-config"



# Capture all subprocess output into a logfile
# Useful for troubleshooting cloud-init issues
output: {all: '| tee -a /var/log/cloud-init-output.log'}

--===============0068039698115580039==
Content-Type: text/cloud-boothook; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="boothook.sh"

#!/usr/bin/bash

# FIXME(shadower) this is a workaround for cloud-init 0.6.3 present in Ubuntu
# 12.04 LTS:
# https://bugs.launchpad.net/heat/+bug/1257410
#
# The old cloud-init doesn't create the users directly so the commands to do
# this are injected though nova_utils.py.
#
# Once we drop support for 0.6.3, we can safely remove this.


# in case heat-cfntools has been installed from package but no symlinks
# are yet in /opt/aws/bin/
cfn-create-aws-symlinks

# Do not remove - the cloud boothook should always return success
exit 0

--===============0068039698115580039==
Content-Type: text/part-handler; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="part-handler.py"

# part-handler
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import datetime
import errno
import os
import sys


def list_types():
    return ["text/x-cfninitdata"]


def handle_part(data, ctype, filename, payload):
    if ctype == "__begin__":
        try:
            os.makedirs('/var/lib/heat-cfntools', int("700", 8))
        except OSError:
            ex_type, e, tb = sys.exc_info()
            if e.errno != errno.EEXIST:
                raise
        return

    if ctype == "__end__":
        return

    timestamp = datetime.datetime.now()
    with open('/var/log/part-handler.log', 'a') as log:
        log.write('%s filename:%s, ctype:%s\n' % (timestamp, filename, ctype))

    if ctype == 'text/x-cfninitdata':
        with open('/var/lib/heat-cfntools/%s' % filename, 'w') as f:
            f.write(payload)

        # TODO(sdake) hopefully temporary until users move to heat-cfntools-1.3
        with open('/var/lib/cloud/data/%s' % filename, 'w') as f:
            f.write(payload)

--===============0068039698115580039==
Content-Type: text/x-cfninitdata; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="cfn-userdata"


--===============0068039698115580039==
Content-Type: text/x-shellscript; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="loguserdata.py"

#!/usr/bin/env python3
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import datetime
import errno
import logging
import os
import subprocess
import sys


VAR_PATH = '/var/lib/heat-cfntools'
LOG = logging.getLogger('heat-provision')


def init_logging():
    LOG.setLevel(logging.INFO)
    LOG.addHandler(logging.StreamHandler())
    fh = logging.FileHandler("/var/log/heat-provision.log")
    os.chmod(fh.baseFilename, int("600", 8))
    LOG.addHandler(fh)


def call(args):

    class LogStream(object):

        def write(self, data):
            LOG.info(data)

    LOG.info('%s\n', ' '.join(args))  # noqa
    try:
        ls = LogStream()
        p = subprocess.Popen(args, stdout=subprocess.PIPE,
                             stderr=subprocess.PIPE)
        data = p.communicate()
        if data:
            for x in data:
                ls.write(x)
    except OSError:
        ex_type, ex, tb = sys.exc_info()
        if ex.errno == errno.ENOEXEC:
            LOG.error('Userdata empty or not executable: %s', ex)
            return os.EX_OK
        else:
            LOG.error('OS error running userdata: %s', ex)
            return os.EX_OSERR
    except Exception:
        ex_type, ex, tb = sys.exc_info()
        LOG.error('Unknown error running userdata: %s', ex)
        return os.EX_SOFTWARE
    return p.returncode


def main():
    userdata_path = os.path.join(VAR_PATH, 'cfn-userdata')
    os.chmod(userdata_path, int("700", 8))

    LOG.info('Provision began: %s', datetime.datetime.now())
    returncode = call([userdata_path])
    LOG.info('Provision done: %s', datetime.datetime.now())
    if returncode:
        return returncode


if __name__ == '__main__':
    init_logging()

    code = main()
    if code:
        LOG.error('Provision failed with exit code %s', code)
        sys.exit(code)

    provision_log = os.path.join(VAR_PATH, 'provision-finished')
    # touch the file so it is timestamped with when finished
    with open(provision_log, 'a'):
        os.utime(provision_log, None)

--===============0068039698115580039==
Content-Type: text/x-cfninitdata; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="cfn-metadata-server"

https://heat-cfnapi-internal.openstack.svc:8000/v1/
--===============0068039698115580039==
Content-Type: text/x-cfninitdata; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="cfn-boto-cfg"

[Boto]
debug = 0
is_secure = 0
https_validate_certificates = 1
cfn_region_name = heat
cfn_region_endpoint = heat-cfnapi-internal.openstack.svc
--===============0068039698115580039==--
',user_id='3c0ab9326d69400aa6a4a91432885d7f',uuid=a2578f61-3f19-40f4-a32f-97cf22569550,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "58a644b5-e3a2-4838-9216-8540447cf0a5", "address": "fa:16:3e:ac:8d:a5", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.107", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58a644b5-e3", "ovs_interfaceid": "58a644b5-e3a2-4838-9216-8540447cf0a5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:44:01 compute-0 nova_compute[185389]: 2026-01-26 16:44:01.693 185393 DEBUG nova.network.os_vif_util [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Converting VIF {"id": "58a644b5-e3a2-4838-9216-8540447cf0a5", "address": "fa:16:3e:ac:8d:a5", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.107", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58a644b5-e3", "ovs_interfaceid": "58a644b5-e3a2-4838-9216-8540447cf0a5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:44:01 compute-0 nova_compute[185389]: 2026-01-26 16:44:01.694 185393 DEBUG nova.network.os_vif_util [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ac:8d:a5,bridge_name='br-int',has_traffic_filtering=True,id=58a644b5-e3a2-4838-9216-8540447cf0a5,network=Network(74318d1e-b1d8-47d5-8ac3-218d758610fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap58a644b5-e3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:44:01 compute-0 nova_compute[185389]: 2026-01-26 16:44:01.694 185393 DEBUG os_vif [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ac:8d:a5,bridge_name='br-int',has_traffic_filtering=True,id=58a644b5-e3a2-4838-9216-8540447cf0a5,network=Network(74318d1e-b1d8-47d5-8ac3-218d758610fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap58a644b5-e3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:44:01 compute-0 nova_compute[185389]: 2026-01-26 16:44:01.695 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:44:01 compute-0 nova_compute[185389]: 2026-01-26 16:44:01.697 185393 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:44:01 compute-0 nova_compute[185389]: 2026-01-26 16:44:01.698 185393 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:44:01 compute-0 nova_compute[185389]: 2026-01-26 16:44:01.704 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:44:01 compute-0 nova_compute[185389]: 2026-01-26 16:44:01.705 185393 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap58a644b5-e3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:44:01 compute-0 nova_compute[185389]: 2026-01-26 16:44:01.706 185393 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap58a644b5-e3, col_values=(('external_ids', {'iface-id': '58a644b5-e3a2-4838-9216-8540447cf0a5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ac:8d:a5', 'vm-uuid': 'a2578f61-3f19-40f4-a32f-97cf22569550'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:44:01 compute-0 nova_compute[185389]: 2026-01-26 16:44:01.708 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:44:01 compute-0 NetworkManager[56253]: <info>  [1769445841.7101] manager: (tap58a644b5-e3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/31)
Jan 26 16:44:01 compute-0 nova_compute[185389]: 2026-01-26 16:44:01.713 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:44:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:44:01.722 106955 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:44:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:44:01.722 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:44:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:44:01.723 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:44:01 compute-0 nova_compute[185389]: 2026-01-26 16:44:01.725 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:44:01 compute-0 nova_compute[185389]: 2026-01-26 16:44:01.726 185393 INFO os_vif [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ac:8d:a5,bridge_name='br-int',has_traffic_filtering=True,id=58a644b5-e3a2-4838-9216-8540447cf0a5,network=Network(74318d1e-b1d8-47d5-8ac3-218d758610fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap58a644b5-e3')
Jan 26 16:44:01 compute-0 nova_compute[185389]: 2026-01-26 16:44:01.807 185393 DEBUG nova.virt.libvirt.driver [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:44:01 compute-0 nova_compute[185389]: 2026-01-26 16:44:01.808 185393 DEBUG nova.virt.libvirt.driver [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:44:01 compute-0 nova_compute[185389]: 2026-01-26 16:44:01.809 185393 DEBUG nova.virt.libvirt.driver [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:44:01 compute-0 nova_compute[185389]: 2026-01-26 16:44:01.809 185393 DEBUG nova.virt.libvirt.driver [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] No VIF found with MAC fa:16:3e:ac:8d:a5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:44:01 compute-0 nova_compute[185389]: 2026-01-26 16:44:01.810 185393 INFO nova.virt.libvirt.driver [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Using config drive
Jan 26 16:44:01 compute-0 podman[241352]: 2026-01-26 16:44:01.941335757 +0000 UTC m=+0.149403174 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:44:02 compute-0 nova_compute[185389]: 2026-01-26 16:44:02.082 185393 DEBUG nova.compute.manager [req-58be049a-d5cc-43ef-860e-f4fe0a9bec45 req-d5a21e4a-1315-4283-aaec-fff76abd4263 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Received event network-changed-58a644b5-e3a2-4838-9216-8540447cf0a5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:44:02 compute-0 nova_compute[185389]: 2026-01-26 16:44:02.084 185393 DEBUG nova.compute.manager [req-58be049a-d5cc-43ef-860e-f4fe0a9bec45 req-d5a21e4a-1315-4283-aaec-fff76abd4263 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Refreshing instance network info cache due to event network-changed-58a644b5-e3a2-4838-9216-8540447cf0a5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:44:02 compute-0 nova_compute[185389]: 2026-01-26 16:44:02.086 185393 DEBUG oslo_concurrency.lockutils [req-58be049a-d5cc-43ef-860e-f4fe0a9bec45 req-d5a21e4a-1315-4283-aaec-fff76abd4263 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquiring lock "refresh_cache-a2578f61-3f19-40f4-a32f-97cf22569550" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:44:02 compute-0 nova_compute[185389]: 2026-01-26 16:44:02.087 185393 DEBUG oslo_concurrency.lockutils [req-58be049a-d5cc-43ef-860e-f4fe0a9bec45 req-d5a21e4a-1315-4283-aaec-fff76abd4263 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquired lock "refresh_cache-a2578f61-3f19-40f4-a32f-97cf22569550" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:44:02 compute-0 nova_compute[185389]: 2026-01-26 16:44:02.088 185393 DEBUG nova.network.neutron [req-58be049a-d5cc-43ef-860e-f4fe0a9bec45 req-d5a21e4a-1315-4283-aaec-fff76abd4263 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Refreshing network info cache for port 58a644b5-e3a2-4838-9216-8540447cf0a5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:44:02 compute-0 nova_compute[185389]: 2026-01-26 16:44:02.668 185393 INFO nova.virt.libvirt.driver [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Creating config drive at /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.config
Jan 26 16:44:02 compute-0 nova_compute[185389]: 2026-01-26 16:44:02.682 185393 DEBUG oslo_concurrency.processutils [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp75lr6dcp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:44:02 compute-0 nova_compute[185389]: 2026-01-26 16:44:02.823 185393 DEBUG oslo_concurrency.processutils [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp75lr6dcp" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:44:02 compute-0 kernel: tap58a644b5-e3: entered promiscuous mode
Jan 26 16:44:02 compute-0 NetworkManager[56253]: <info>  [1769445842.9165] manager: (tap58a644b5-e3): new Tun device (/org/freedesktop/NetworkManager/Devices/32)
Jan 26 16:44:02 compute-0 ovn_controller[97699]: 2026-01-26T16:44:02Z|00044|binding|INFO|Claiming lport 58a644b5-e3a2-4838-9216-8540447cf0a5 for this chassis.
Jan 26 16:44:02 compute-0 ovn_controller[97699]: 2026-01-26T16:44:02Z|00045|binding|INFO|58a644b5-e3a2-4838-9216-8540447cf0a5: Claiming fa:16:3e:ac:8d:a5 192.168.0.107
Jan 26 16:44:02 compute-0 nova_compute[185389]: 2026-01-26 16:44:02.923 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:44:02 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:44:02.945 106955 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ac:8d:a5 192.168.0.107'], port_security=['fa:16:3e:ac:8d:a5 192.168.0.107'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-2qbervo2qfhx-3o3bmw4mfsy3-eo23jljqgyv6-port-2y3pojzsevxv', 'neutron:cidrs': '192.168.0.107/24', 'neutron:device_id': 'a2578f61-3f19-40f4-a32f-97cf22569550', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74318d1e-b1d8-47d5-8ac3-218d758610fe', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-2qbervo2qfhx-3o3bmw4mfsy3-eo23jljqgyv6-port-2y3pojzsevxv', 'neutron:project_id': 'aa8f1f3bbce34237a208c8e92ca9286f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c6ae7745-53c4-4846-bf8b-0c9f0303bef3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.229'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1197b65b-eda5-4824-97ab-519748b0b6a7, chassis=[<ovs.db.idl.Row object at 0x7faee18cfdf0>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7faee18cfdf0>], logical_port=58a644b5-e3a2-4838-9216-8540447cf0a5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:44:02 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:44:02.946 106955 INFO neutron.agent.ovn.metadata.agent [-] Port 58a644b5-e3a2-4838-9216-8540447cf0a5 in datapath 74318d1e-b1d8-47d5-8ac3-218d758610fe bound to our chassis
Jan 26 16:44:02 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:44:02.947 106955 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 74318d1e-b1d8-47d5-8ac3-218d758610fe
Jan 26 16:44:02 compute-0 systemd-udevd[241391]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:44:02 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:44:02.969 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[7c915e84-25dd-4563-86da-942fedc5efca]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:44:02 compute-0 ovn_controller[97699]: 2026-01-26T16:44:02Z|00046|binding|INFO|Setting lport 58a644b5-e3a2-4838-9216-8540447cf0a5 ovn-installed in OVS
Jan 26 16:44:02 compute-0 ovn_controller[97699]: 2026-01-26T16:44:02Z|00047|binding|INFO|Setting lport 58a644b5-e3a2-4838-9216-8540447cf0a5 up in Southbound
Jan 26 16:44:02 compute-0 nova_compute[185389]: 2026-01-26 16:44:02.970 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:44:02 compute-0 systemd-machined[156679]: New machine qemu-4-instance-00000004.
Jan 26 16:44:02 compute-0 systemd[1]: Started Virtual Machine qemu-4-instance-00000004.
Jan 26 16:44:02 compute-0 NetworkManager[56253]: <info>  [1769445842.9874] device (tap58a644b5-e3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:44:02 compute-0 NetworkManager[56253]: <info>  [1769445842.9885] device (tap58a644b5-e3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:44:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:44:03.023 238787 DEBUG oslo.privsep.daemon [-] privsep: reply[5ea858e1-d7cc-428a-94fa-52cb15037d02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:44:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:44:03.028 238787 DEBUG oslo.privsep.daemon [-] privsep: reply[11480e3b-c274-4ba3-9a6e-c198a0a50691]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:44:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:44:03.069 238787 DEBUG oslo.privsep.daemon [-] privsep: reply[a2cd33dc-2d4d-4bb1-a534-7fc37c729370]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:44:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:44:03.090 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[271fc27d-4915-477c-86bb-f31cfed81f22]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap74318d1e-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b2:6c:31'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 410415, 'reachable_time': 40653, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 241405, 'error': None, 'target': 'ovnmeta-74318d1e-b1d8-47d5-8ac3-218d758610fe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:44:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:44:03.112 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[dfefcc30-5e35-4bc9-8a83-6c874fd0094d]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap74318d1e-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 410434, 'tstamp': 410434}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 241406, 'error': None, 'target': 'ovnmeta-74318d1e-b1d8-47d5-8ac3-218d758610fe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap74318d1e-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 410439, 'tstamp': 410439}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 241406, 'error': None, 'target': 'ovnmeta-74318d1e-b1d8-47d5-8ac3-218d758610fe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:44:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:44:03.116 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap74318d1e-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:44:03 compute-0 nova_compute[185389]: 2026-01-26 16:44:03.120 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:44:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:44:03.124 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap74318d1e-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:44:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:44:03.125 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:44:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:44:03.126 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap74318d1e-b0, col_values=(('external_ids', {'iface-id': '6045fbea-609e-4588-93b4-ca6dda4224d1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:44:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:44:03.127 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:44:03 compute-0 nova_compute[185389]: 2026-01-26 16:44:03.535 185393 DEBUG nova.virt.driver [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Emitting event <LifecycleEvent: 1769445843.5352664, a2578f61-3f19-40f4-a32f-97cf22569550 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:44:03 compute-0 nova_compute[185389]: 2026-01-26 16:44:03.536 185393 INFO nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] VM Started (Lifecycle Event)
Jan 26 16:44:03 compute-0 nova_compute[185389]: 2026-01-26 16:44:03.700 185393 DEBUG nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:44:03 compute-0 nova_compute[185389]: 2026-01-26 16:44:03.708 185393 DEBUG nova.virt.driver [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Emitting event <LifecycleEvent: 1769445843.5354037, a2578f61-3f19-40f4-a32f-97cf22569550 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:44:03 compute-0 nova_compute[185389]: 2026-01-26 16:44:03.708 185393 INFO nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] VM Paused (Lifecycle Event)
Jan 26 16:44:03 compute-0 nova_compute[185389]: 2026-01-26 16:44:03.734 185393 DEBUG nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:44:03 compute-0 nova_compute[185389]: 2026-01-26 16:44:03.740 185393 DEBUG nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:44:03 compute-0 nova_compute[185389]: 2026-01-26 16:44:03.763 185393 INFO nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:44:04 compute-0 nova_compute[185389]: 2026-01-26 16:44:04.214 185393 DEBUG nova.compute.manager [req-a814c163-ef26-453b-8ae5-a386433ac55d req-46c8b946-5591-4174-ab98-1f7ebed5102a 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Received event network-vif-plugged-58a644b5-e3a2-4838-9216-8540447cf0a5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:44:04 compute-0 nova_compute[185389]: 2026-01-26 16:44:04.214 185393 DEBUG oslo_concurrency.lockutils [req-a814c163-ef26-453b-8ae5-a386433ac55d req-46c8b946-5591-4174-ab98-1f7ebed5102a 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquiring lock "a2578f61-3f19-40f4-a32f-97cf22569550-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:44:04 compute-0 nova_compute[185389]: 2026-01-26 16:44:04.215 185393 DEBUG oslo_concurrency.lockutils [req-a814c163-ef26-453b-8ae5-a386433ac55d req-46c8b946-5591-4174-ab98-1f7ebed5102a 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "a2578f61-3f19-40f4-a32f-97cf22569550-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:44:04 compute-0 nova_compute[185389]: 2026-01-26 16:44:04.215 185393 DEBUG oslo_concurrency.lockutils [req-a814c163-ef26-453b-8ae5-a386433ac55d req-46c8b946-5591-4174-ab98-1f7ebed5102a 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "a2578f61-3f19-40f4-a32f-97cf22569550-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:44:04 compute-0 nova_compute[185389]: 2026-01-26 16:44:04.215 185393 DEBUG nova.compute.manager [req-a814c163-ef26-453b-8ae5-a386433ac55d req-46c8b946-5591-4174-ab98-1f7ebed5102a 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Processing event network-vif-plugged-58a644b5-e3a2-4838-9216-8540447cf0a5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:44:04 compute-0 nova_compute[185389]: 2026-01-26 16:44:04.216 185393 DEBUG nova.compute.manager [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:44:04 compute-0 nova_compute[185389]: 2026-01-26 16:44:04.226 185393 DEBUG nova.virt.driver [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Emitting event <LifecycleEvent: 1769445844.2266748, a2578f61-3f19-40f4-a32f-97cf22569550 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:44:04 compute-0 nova_compute[185389]: 2026-01-26 16:44:04.227 185393 INFO nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] VM Resumed (Lifecycle Event)
Jan 26 16:44:04 compute-0 nova_compute[185389]: 2026-01-26 16:44:04.230 185393 DEBUG nova.virt.libvirt.driver [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:44:04 compute-0 nova_compute[185389]: 2026-01-26 16:44:04.236 185393 INFO nova.virt.libvirt.driver [-] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Instance spawned successfully.
Jan 26 16:44:04 compute-0 nova_compute[185389]: 2026-01-26 16:44:04.237 185393 DEBUG nova.virt.libvirt.driver [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:44:04 compute-0 nova_compute[185389]: 2026-01-26 16:44:04.247 185393 DEBUG nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:44:04 compute-0 nova_compute[185389]: 2026-01-26 16:44:04.253 185393 DEBUG nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:44:04 compute-0 podman[241414]: 2026-01-26 16:44:04.26158101 +0000 UTC m=+0.112150428 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Jan 26 16:44:04 compute-0 nova_compute[185389]: 2026-01-26 16:44:04.264 185393 DEBUG nova.virt.libvirt.driver [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:44:04 compute-0 nova_compute[185389]: 2026-01-26 16:44:04.264 185393 DEBUG nova.virt.libvirt.driver [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:44:04 compute-0 nova_compute[185389]: 2026-01-26 16:44:04.265 185393 DEBUG nova.virt.libvirt.driver [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:44:04 compute-0 nova_compute[185389]: 2026-01-26 16:44:04.266 185393 DEBUG nova.virt.libvirt.driver [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:44:04 compute-0 nova_compute[185389]: 2026-01-26 16:44:04.267 185393 DEBUG nova.virt.libvirt.driver [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:44:04 compute-0 nova_compute[185389]: 2026-01-26 16:44:04.267 185393 DEBUG nova.virt.libvirt.driver [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:44:04 compute-0 nova_compute[185389]: 2026-01-26 16:44:04.272 185393 INFO nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:44:04 compute-0 nova_compute[185389]: 2026-01-26 16:44:04.530 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:44:04 compute-0 nova_compute[185389]: 2026-01-26 16:44:04.663 185393 DEBUG nova.network.neutron [req-58be049a-d5cc-43ef-860e-f4fe0a9bec45 req-d5a21e4a-1315-4283-aaec-fff76abd4263 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Updated VIF entry in instance network info cache for port 58a644b5-e3a2-4838-9216-8540447cf0a5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:44:04 compute-0 nova_compute[185389]: 2026-01-26 16:44:04.664 185393 DEBUG nova.network.neutron [req-58be049a-d5cc-43ef-860e-f4fe0a9bec45 req-d5a21e4a-1315-4283-aaec-fff76abd4263 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Updating instance_info_cache with network_info: [{"id": "58a644b5-e3a2-4838-9216-8540447cf0a5", "address": "fa:16:3e:ac:8d:a5", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.107", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58a644b5-e3", "ovs_interfaceid": "58a644b5-e3a2-4838-9216-8540447cf0a5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:44:04 compute-0 nova_compute[185389]: 2026-01-26 16:44:04.672 185393 INFO nova.compute.manager [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Took 12.10 seconds to spawn the instance on the hypervisor.
Jan 26 16:44:04 compute-0 nova_compute[185389]: 2026-01-26 16:44:04.676 185393 DEBUG nova.compute.manager [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:44:04 compute-0 nova_compute[185389]: 2026-01-26 16:44:04.862 185393 DEBUG oslo_concurrency.lockutils [req-58be049a-d5cc-43ef-860e-f4fe0a9bec45 req-d5a21e4a-1315-4283-aaec-fff76abd4263 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Releasing lock "refresh_cache-a2578f61-3f19-40f4-a32f-97cf22569550" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:44:04 compute-0 nova_compute[185389]: 2026-01-26 16:44:04.864 185393 DEBUG nova.compute.manager [req-58be049a-d5cc-43ef-860e-f4fe0a9bec45 req-d5a21e4a-1315-4283-aaec-fff76abd4263 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 4243720a-45ff-439a-9753-a7da419082b2] Received event network-vif-plugged-7839c0a2-ac0b-4c45-8c81-670b0c2a638c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:44:04 compute-0 nova_compute[185389]: 2026-01-26 16:44:04.865 185393 DEBUG oslo_concurrency.lockutils [req-58be049a-d5cc-43ef-860e-f4fe0a9bec45 req-d5a21e4a-1315-4283-aaec-fff76abd4263 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquiring lock "4243720a-45ff-439a-9753-a7da419082b2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:44:04 compute-0 nova_compute[185389]: 2026-01-26 16:44:04.866 185393 DEBUG oslo_concurrency.lockutils [req-58be049a-d5cc-43ef-860e-f4fe0a9bec45 req-d5a21e4a-1315-4283-aaec-fff76abd4263 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "4243720a-45ff-439a-9753-a7da419082b2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:44:04 compute-0 nova_compute[185389]: 2026-01-26 16:44:04.867 185393 DEBUG oslo_concurrency.lockutils [req-58be049a-d5cc-43ef-860e-f4fe0a9bec45 req-d5a21e4a-1315-4283-aaec-fff76abd4263 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "4243720a-45ff-439a-9753-a7da419082b2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:44:04 compute-0 nova_compute[185389]: 2026-01-26 16:44:04.868 185393 DEBUG nova.compute.manager [req-58be049a-d5cc-43ef-860e-f4fe0a9bec45 req-d5a21e4a-1315-4283-aaec-fff76abd4263 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 4243720a-45ff-439a-9753-a7da419082b2] No waiting events found dispatching network-vif-plugged-7839c0a2-ac0b-4c45-8c81-670b0c2a638c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:44:04 compute-0 nova_compute[185389]: 2026-01-26 16:44:04.869 185393 WARNING nova.compute.manager [req-58be049a-d5cc-43ef-860e-f4fe0a9bec45 req-d5a21e4a-1315-4283-aaec-fff76abd4263 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 4243720a-45ff-439a-9753-a7da419082b2] Received unexpected event network-vif-plugged-7839c0a2-ac0b-4c45-8c81-670b0c2a638c for instance with vm_state active and task_state None.
Jan 26 16:44:04 compute-0 nova_compute[185389]: 2026-01-26 16:44:04.956 185393 INFO nova.compute.manager [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Took 13.07 seconds to build instance.
Jan 26 16:44:05 compute-0 nova_compute[185389]: 2026-01-26 16:44:05.078 185393 DEBUG oslo_concurrency.lockutils [None req-ebe6b674-cfdc-43d8-b973-6ee977690768 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "a2578f61-3f19-40f4-a32f-97cf22569550" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.326s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:44:05 compute-0 nova_compute[185389]: 2026-01-26 16:44:05.775 185393 DEBUG oslo_concurrency.lockutils [None req-f87ec5d6-55e2-4d08-a8f1-91f04db09a85 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Acquiring lock "4243720a-45ff-439a-9753-a7da419082b2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:44:05 compute-0 nova_compute[185389]: 2026-01-26 16:44:05.777 185393 DEBUG oslo_concurrency.lockutils [None req-f87ec5d6-55e2-4d08-a8f1-91f04db09a85 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "4243720a-45ff-439a-9753-a7da419082b2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:44:05 compute-0 nova_compute[185389]: 2026-01-26 16:44:05.779 185393 DEBUG oslo_concurrency.lockutils [None req-f87ec5d6-55e2-4d08-a8f1-91f04db09a85 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Acquiring lock "4243720a-45ff-439a-9753-a7da419082b2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:44:05 compute-0 nova_compute[185389]: 2026-01-26 16:44:05.780 185393 DEBUG oslo_concurrency.lockutils [None req-f87ec5d6-55e2-4d08-a8f1-91f04db09a85 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "4243720a-45ff-439a-9753-a7da419082b2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:44:05 compute-0 nova_compute[185389]: 2026-01-26 16:44:05.781 185393 DEBUG oslo_concurrency.lockutils [None req-f87ec5d6-55e2-4d08-a8f1-91f04db09a85 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "4243720a-45ff-439a-9753-a7da419082b2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:44:05 compute-0 nova_compute[185389]: 2026-01-26 16:44:05.784 185393 INFO nova.compute.manager [None req-f87ec5d6-55e2-4d08-a8f1-91f04db09a85 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 4243720a-45ff-439a-9753-a7da419082b2] Terminating instance
Jan 26 16:44:05 compute-0 nova_compute[185389]: 2026-01-26 16:44:05.787 185393 DEBUG nova.compute.manager [None req-f87ec5d6-55e2-4d08-a8f1-91f04db09a85 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 4243720a-45ff-439a-9753-a7da419082b2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:44:05 compute-0 kernel: tap7839c0a2-ac (unregistering): left promiscuous mode
Jan 26 16:44:05 compute-0 NetworkManager[56253]: <info>  [1769445845.8326] device (tap7839c0a2-ac): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:44:05 compute-0 nova_compute[185389]: 2026-01-26 16:44:05.844 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:44:05 compute-0 ovn_controller[97699]: 2026-01-26T16:44:05Z|00048|binding|INFO|Releasing lport 7839c0a2-ac0b-4c45-8c81-670b0c2a638c from this chassis (sb_readonly=0)
Jan 26 16:44:05 compute-0 ovn_controller[97699]: 2026-01-26T16:44:05Z|00049|binding|INFO|Setting lport 7839c0a2-ac0b-4c45-8c81-670b0c2a638c down in Southbound
Jan 26 16:44:05 compute-0 ovn_controller[97699]: 2026-01-26T16:44:05Z|00050|binding|INFO|Removing iface tap7839c0a2-ac ovn-installed in OVS
Jan 26 16:44:05 compute-0 nova_compute[185389]: 2026-01-26 16:44:05.850 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:44:05 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:44:05.853 106955 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ed:33:15 192.168.0.104'], port_security=['fa:16:3e:ed:33:15 192.168.0.104'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-2qbervo2qfhx-pfog52jhyvfv-755q4k2a5zn5-port-hcyclvuzjcab', 'neutron:cidrs': '192.168.0.104/24', 'neutron:device_id': '4243720a-45ff-439a-9753-a7da419082b2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74318d1e-b1d8-47d5-8ac3-218d758610fe', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-2qbervo2qfhx-pfog52jhyvfv-755q4k2a5zn5-port-hcyclvuzjcab', 'neutron:project_id': 'aa8f1f3bbce34237a208c8e92ca9286f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c6ae7745-53c4-4846-bf8b-0c9f0303bef3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.212', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1197b65b-eda5-4824-97ab-519748b0b6a7, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7faee18cfdf0>], logical_port=7839c0a2-ac0b-4c45-8c81-670b0c2a638c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7faee18cfdf0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:44:05 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:44:05.871 106955 INFO neutron.agent.ovn.metadata.agent [-] Port 7839c0a2-ac0b-4c45-8c81-670b0c2a638c in datapath 74318d1e-b1d8-47d5-8ac3-218d758610fe unbound from our chassis
Jan 26 16:44:05 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:44:05.875 106955 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 74318d1e-b1d8-47d5-8ac3-218d758610fe
Jan 26 16:44:05 compute-0 nova_compute[185389]: 2026-01-26 16:44:05.876 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:44:05 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Deactivated successfully.
Jan 26 16:44:05 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Consumed 6.974s CPU time.
Jan 26 16:44:05 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:44:05.903 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[98fab7b2-255a-4bb1-8d50-51dba1c987dd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:44:05 compute-0 systemd-machined[156679]: Machine qemu-3-instance-00000003 terminated.
Jan 26 16:44:05 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:44:05.949 238787 DEBUG oslo.privsep.daemon [-] privsep: reply[aa74d84d-b01c-4615-9cfb-73e550bd1249]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:44:05 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:44:05.952 238787 DEBUG oslo.privsep.daemon [-] privsep: reply[9999474b-e0b5-46f7-9ef4-d1ec520d3206]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:44:05 compute-0 podman[241437]: 2026-01-26 16:44:05.984357738 +0000 UTC m=+0.096289205 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, io.buildah.version=1.29.0, release=1214.1726694543, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, name=ubi9, architecture=x86_64, 
com.redhat.component=ubi9-container, config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git)
Jan 26 16:44:05 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:44:05.983 238787 DEBUG oslo.privsep.daemon [-] privsep: reply[7fd578ed-1bf9-4537-89c4-c216e43661de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:44:06 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:44:06.012 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[517382c5-1ade-422d-9ad1-f358c918ec8c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap74318d1e-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b2:6c:31'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 410415, 'reachable_time': 40653, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 241478, 'error': None, 'target': 'ovnmeta-74318d1e-b1d8-47d5-8ac3-218d758610fe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:44:06 compute-0 podman[241432]: 2026-01-26 16:44:06.037006834 +0000 UTC m=+0.146343371 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, 
container_name=ovn_controller, org.label-schema.schema-version=1.0)
Jan 26 16:44:06 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:44:06.037 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[fac478e7-d17a-4234-90bb-dcccf7e1a9a7]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap74318d1e-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 410434, 'tstamp': 410434}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 241487, 'error': None, 'target': 'ovnmeta-74318d1e-b1d8-47d5-8ac3-218d758610fe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap74318d1e-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 410439, 'tstamp': 410439}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 241487, 'error': None, 'target': 'ovnmeta-74318d1e-b1d8-47d5-8ac3-218d758610fe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:44:06 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:44:06.040 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap74318d1e-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:44:06 compute-0 nova_compute[185389]: 2026-01-26 16:44:06.043 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:44:06 compute-0 nova_compute[185389]: 2026-01-26 16:44:06.051 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:44:06 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:44:06.051 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap74318d1e-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:44:06 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:44:06.052 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:44:06 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:44:06.052 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap74318d1e-b0, col_values=(('external_ids', {'iface-id': '6045fbea-609e-4588-93b4-ca6dda4224d1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:44:06 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:44:06.052 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:44:06 compute-0 nova_compute[185389]: 2026-01-26 16:44:06.090 185393 INFO nova.virt.libvirt.driver [-] [instance: 4243720a-45ff-439a-9753-a7da419082b2] Instance destroyed successfully.
Jan 26 16:44:06 compute-0 nova_compute[185389]: 2026-01-26 16:44:06.091 185393 DEBUG nova.objects.instance [None req-f87ec5d6-55e2-4d08-a8f1-91f04db09a85 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lazy-loading 'resources' on Instance uuid 4243720a-45ff-439a-9753-a7da419082b2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:44:06 compute-0 nova_compute[185389]: 2026-01-26 16:44:06.148 185393 DEBUG nova.virt.libvirt.vif [None req-f87ec5d6-55e2-4d08-a8f1-91f04db09a85 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:43:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-vo2qfhx-pfog52jhyvfv-755q4k2a5zn5-vnf-p7j2ngljbbm6',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-vo2qfhx-pfog52jhyvfv-755q4k2a5zn5-vnf-p7j2ngljbbm6',id=3,image_ref='718285d9-0264-40f4-9fb3-d2faff180284',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:44:00Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='06b33269-d1c6-4fb9-a44b-be304982a550'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='aa8f1f3bbce34237a208c8e92ca9286f',ramdisk_id='',reservation_id='r-dy056y89',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,member,reader',image_base_image_ref='718285d9-0264-40f4-9fb3-d2faff180284',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image
_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:44:00Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0wODcxODI0OTI5OTQwMzY5MDcwPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTA4NzE4MjQ5Mjk5NDAzNjkwNzA9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MDg3MTgyNDkyOTk0MDM2OTA3MD09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTA4NzE4MjQ5Mjk5NDAzNjkwNzA9PQpDb250ZW50LVR5cGU6IHRleHQvcGFyd
C1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgI
CAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0wODcxODI0OTI5OTQwMzY5MDcwPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0wODcxODI0OTI5OTQwMzY5MDcwPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5ja
G1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvK
Jan 26 16:44:06 compute-0 nova_compute[185389]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MDg3M
TgyNDkyOTk0MDM2OTA3MD09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTA4NzE4MjQ5Mjk5NDAzNjkwNzA9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0wODcxODI0OTI5OTQwMzY5MDcwPT0tLQo=',user_id='3c0ab9326d69400aa6a4a91432885d7f',uuid=4243720a-45ff-439a-9753-a7da419082b2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7839c0a2-ac0b-4c45-8c81-670b0c2a638c", "address": "fa:16:3e:ed:33:15", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.104", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7839c0a2-ac", "ovs_interfaceid": "7839c0a2-ac0b-4c45-8c81-670b0c2a638c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, 
"preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:44:06 compute-0 nova_compute[185389]: 2026-01-26 16:44:06.149 185393 DEBUG nova.network.os_vif_util [None req-f87ec5d6-55e2-4d08-a8f1-91f04db09a85 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Converting VIF {"id": "7839c0a2-ac0b-4c45-8c81-670b0c2a638c", "address": "fa:16:3e:ed:33:15", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.104", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7839c0a2-ac", "ovs_interfaceid": "7839c0a2-ac0b-4c45-8c81-670b0c2a638c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:44:06 compute-0 nova_compute[185389]: 2026-01-26 16:44:06.150 185393 DEBUG nova.network.os_vif_util [None req-f87ec5d6-55e2-4d08-a8f1-91f04db09a85 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ed:33:15,bridge_name='br-int',has_traffic_filtering=True,id=7839c0a2-ac0b-4c45-8c81-670b0c2a638c,network=Network(74318d1e-b1d8-47d5-8ac3-218d758610fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7839c0a2-ac') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:44:06 compute-0 nova_compute[185389]: 2026-01-26 16:44:06.150 185393 DEBUG os_vif [None req-f87ec5d6-55e2-4d08-a8f1-91f04db09a85 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ed:33:15,bridge_name='br-int',has_traffic_filtering=True,id=7839c0a2-ac0b-4c45-8c81-670b0c2a638c,network=Network(74318d1e-b1d8-47d5-8ac3-218d758610fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7839c0a2-ac') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:44:06 compute-0 nova_compute[185389]: 2026-01-26 16:44:06.152 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:44:06 compute-0 nova_compute[185389]: 2026-01-26 16:44:06.152 185393 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7839c0a2-ac, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:44:06 compute-0 nova_compute[185389]: 2026-01-26 16:44:06.154 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:44:06 compute-0 nova_compute[185389]: 2026-01-26 16:44:06.156 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:44:06 compute-0 nova_compute[185389]: 2026-01-26 16:44:06.156 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:44:06 compute-0 nova_compute[185389]: 2026-01-26 16:44:06.159 185393 INFO os_vif [None req-f87ec5d6-55e2-4d08-a8f1-91f04db09a85 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ed:33:15,bridge_name='br-int',has_traffic_filtering=True,id=7839c0a2-ac0b-4c45-8c81-670b0c2a638c,network=Network(74318d1e-b1d8-47d5-8ac3-218d758610fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7839c0a2-ac')
Jan 26 16:44:06 compute-0 nova_compute[185389]: 2026-01-26 16:44:06.160 185393 INFO nova.virt.libvirt.driver [None req-f87ec5d6-55e2-4d08-a8f1-91f04db09a85 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 4243720a-45ff-439a-9753-a7da419082b2] Deleting instance files /var/lib/nova/instances/4243720a-45ff-439a-9753-a7da419082b2_del
Jan 26 16:44:06 compute-0 nova_compute[185389]: 2026-01-26 16:44:06.161 185393 INFO nova.virt.libvirt.driver [None req-f87ec5d6-55e2-4d08-a8f1-91f04db09a85 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 4243720a-45ff-439a-9753-a7da419082b2] Deletion of /var/lib/nova/instances/4243720a-45ff-439a-9753-a7da419082b2_del complete
Jan 26 16:44:06 compute-0 nova_compute[185389]: 2026-01-26 16:44:06.238 185393 DEBUG nova.virt.libvirt.host [None req-f87ec5d6-55e2-4d08-a8f1-91f04db09a85 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Jan 26 16:44:06 compute-0 nova_compute[185389]: 2026-01-26 16:44:06.239 185393 INFO nova.virt.libvirt.host [None req-f87ec5d6-55e2-4d08-a8f1-91f04db09a85 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] UEFI support detected
Jan 26 16:44:06 compute-0 nova_compute[185389]: 2026-01-26 16:44:06.241 185393 INFO nova.compute.manager [None req-f87ec5d6-55e2-4d08-a8f1-91f04db09a85 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 4243720a-45ff-439a-9753-a7da419082b2] Took 0.45 seconds to destroy the instance on the hypervisor.
Jan 26 16:44:06 compute-0 nova_compute[185389]: 2026-01-26 16:44:06.242 185393 DEBUG oslo.service.loopingcall [None req-f87ec5d6-55e2-4d08-a8f1-91f04db09a85 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:44:06 compute-0 nova_compute[185389]: 2026-01-26 16:44:06.242 185393 DEBUG nova.compute.manager [-] [instance: 4243720a-45ff-439a-9753-a7da419082b2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:44:06 compute-0 nova_compute[185389]: 2026-01-26 16:44:06.242 185393 DEBUG nova.network.neutron [-] [instance: 4243720a-45ff-439a-9753-a7da419082b2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:44:06 compute-0 rsyslogd[235842]: message too long (8192) with configured size 8096, begin of message is: 2026-01-26 16:44:06.148 185393 DEBUG nova.virt.libvirt.vif [None req-f87ec5d6-55 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 26 16:44:06 compute-0 nova_compute[185389]: 2026-01-26 16:44:06.697 185393 DEBUG nova.compute.manager [req-430de293-c808-427a-b6c6-f72e4cec2dc0 req-0da4a8e8-fbd9-49d4-a194-de096c4914e6 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Received event network-vif-plugged-58a644b5-e3a2-4838-9216-8540447cf0a5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:44:06 compute-0 nova_compute[185389]: 2026-01-26 16:44:06.698 185393 DEBUG oslo_concurrency.lockutils [req-430de293-c808-427a-b6c6-f72e4cec2dc0 req-0da4a8e8-fbd9-49d4-a194-de096c4914e6 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquiring lock "a2578f61-3f19-40f4-a32f-97cf22569550-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:44:06 compute-0 nova_compute[185389]: 2026-01-26 16:44:06.699 185393 DEBUG oslo_concurrency.lockutils [req-430de293-c808-427a-b6c6-f72e4cec2dc0 req-0da4a8e8-fbd9-49d4-a194-de096c4914e6 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "a2578f61-3f19-40f4-a32f-97cf22569550-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:44:06 compute-0 nova_compute[185389]: 2026-01-26 16:44:06.699 185393 DEBUG oslo_concurrency.lockutils [req-430de293-c808-427a-b6c6-f72e4cec2dc0 req-0da4a8e8-fbd9-49d4-a194-de096c4914e6 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "a2578f61-3f19-40f4-a32f-97cf22569550-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:44:06 compute-0 nova_compute[185389]: 2026-01-26 16:44:06.699 185393 DEBUG nova.compute.manager [req-430de293-c808-427a-b6c6-f72e4cec2dc0 req-0da4a8e8-fbd9-49d4-a194-de096c4914e6 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] No waiting events found dispatching network-vif-plugged-58a644b5-e3a2-4838-9216-8540447cf0a5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:44:06 compute-0 nova_compute[185389]: 2026-01-26 16:44:06.700 185393 WARNING nova.compute.manager [req-430de293-c808-427a-b6c6-f72e4cec2dc0 req-0da4a8e8-fbd9-49d4-a194-de096c4914e6 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Received unexpected event network-vif-plugged-58a644b5-e3a2-4838-9216-8540447cf0a5 for instance with vm_state active and task_state None.
Jan 26 16:44:08 compute-0 nova_compute[185389]: 2026-01-26 16:44:08.805 185393 DEBUG nova.compute.manager [req-296425d0-23e9-4e75-a3e3-d97ff7f6b28a req-7bbd2d4b-2049-4a6c-8cd3-81e2ce199096 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 4243720a-45ff-439a-9753-a7da419082b2] Received event network-vif-unplugged-7839c0a2-ac0b-4c45-8c81-670b0c2a638c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:44:08 compute-0 nova_compute[185389]: 2026-01-26 16:44:08.806 185393 DEBUG oslo_concurrency.lockutils [req-296425d0-23e9-4e75-a3e3-d97ff7f6b28a req-7bbd2d4b-2049-4a6c-8cd3-81e2ce199096 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquiring lock "4243720a-45ff-439a-9753-a7da419082b2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:44:08 compute-0 nova_compute[185389]: 2026-01-26 16:44:08.806 185393 DEBUG oslo_concurrency.lockutils [req-296425d0-23e9-4e75-a3e3-d97ff7f6b28a req-7bbd2d4b-2049-4a6c-8cd3-81e2ce199096 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "4243720a-45ff-439a-9753-a7da419082b2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:44:08 compute-0 nova_compute[185389]: 2026-01-26 16:44:08.806 185393 DEBUG oslo_concurrency.lockutils [req-296425d0-23e9-4e75-a3e3-d97ff7f6b28a req-7bbd2d4b-2049-4a6c-8cd3-81e2ce199096 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "4243720a-45ff-439a-9753-a7da419082b2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:44:08 compute-0 nova_compute[185389]: 2026-01-26 16:44:08.806 185393 DEBUG nova.compute.manager [req-296425d0-23e9-4e75-a3e3-d97ff7f6b28a req-7bbd2d4b-2049-4a6c-8cd3-81e2ce199096 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 4243720a-45ff-439a-9753-a7da419082b2] No waiting events found dispatching network-vif-unplugged-7839c0a2-ac0b-4c45-8c81-670b0c2a638c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:44:08 compute-0 nova_compute[185389]: 2026-01-26 16:44:08.807 185393 DEBUG nova.compute.manager [req-296425d0-23e9-4e75-a3e3-d97ff7f6b28a req-7bbd2d4b-2049-4a6c-8cd3-81e2ce199096 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 4243720a-45ff-439a-9753-a7da419082b2] Received event network-vif-unplugged-7839c0a2-ac0b-4c45-8c81-670b0c2a638c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 16:44:08 compute-0 nova_compute[185389]: 2026-01-26 16:44:08.807 185393 DEBUG nova.compute.manager [req-296425d0-23e9-4e75-a3e3-d97ff7f6b28a req-7bbd2d4b-2049-4a6c-8cd3-81e2ce199096 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 4243720a-45ff-439a-9753-a7da419082b2] Received event network-vif-plugged-7839c0a2-ac0b-4c45-8c81-670b0c2a638c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:44:08 compute-0 nova_compute[185389]: 2026-01-26 16:44:08.807 185393 DEBUG oslo_concurrency.lockutils [req-296425d0-23e9-4e75-a3e3-d97ff7f6b28a req-7bbd2d4b-2049-4a6c-8cd3-81e2ce199096 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquiring lock "4243720a-45ff-439a-9753-a7da419082b2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:44:08 compute-0 nova_compute[185389]: 2026-01-26 16:44:08.807 185393 DEBUG oslo_concurrency.lockutils [req-296425d0-23e9-4e75-a3e3-d97ff7f6b28a req-7bbd2d4b-2049-4a6c-8cd3-81e2ce199096 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "4243720a-45ff-439a-9753-a7da419082b2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:44:08 compute-0 nova_compute[185389]: 2026-01-26 16:44:08.807 185393 DEBUG oslo_concurrency.lockutils [req-296425d0-23e9-4e75-a3e3-d97ff7f6b28a req-7bbd2d4b-2049-4a6c-8cd3-81e2ce199096 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "4243720a-45ff-439a-9753-a7da419082b2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:44:08 compute-0 nova_compute[185389]: 2026-01-26 16:44:08.808 185393 DEBUG nova.compute.manager [req-296425d0-23e9-4e75-a3e3-d97ff7f6b28a req-7bbd2d4b-2049-4a6c-8cd3-81e2ce199096 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 4243720a-45ff-439a-9753-a7da419082b2] No waiting events found dispatching network-vif-plugged-7839c0a2-ac0b-4c45-8c81-670b0c2a638c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:44:08 compute-0 nova_compute[185389]: 2026-01-26 16:44:08.808 185393 WARNING nova.compute.manager [req-296425d0-23e9-4e75-a3e3-d97ff7f6b28a req-7bbd2d4b-2049-4a6c-8cd3-81e2ce199096 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 4243720a-45ff-439a-9753-a7da419082b2] Received unexpected event network-vif-plugged-7839c0a2-ac0b-4c45-8c81-670b0c2a638c for instance with vm_state active and task_state deleting.
Jan 26 16:44:08 compute-0 nova_compute[185389]: 2026-01-26 16:44:08.808 185393 DEBUG nova.compute.manager [req-296425d0-23e9-4e75-a3e3-d97ff7f6b28a req-7bbd2d4b-2049-4a6c-8cd3-81e2ce199096 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 4243720a-45ff-439a-9753-a7da419082b2] Received event network-changed-7839c0a2-ac0b-4c45-8c81-670b0c2a638c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:44:08 compute-0 nova_compute[185389]: 2026-01-26 16:44:08.808 185393 DEBUG nova.compute.manager [req-296425d0-23e9-4e75-a3e3-d97ff7f6b28a req-7bbd2d4b-2049-4a6c-8cd3-81e2ce199096 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 4243720a-45ff-439a-9753-a7da419082b2] Refreshing instance network info cache due to event network-changed-7839c0a2-ac0b-4c45-8c81-670b0c2a638c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:44:08 compute-0 nova_compute[185389]: 2026-01-26 16:44:08.808 185393 DEBUG oslo_concurrency.lockutils [req-296425d0-23e9-4e75-a3e3-d97ff7f6b28a req-7bbd2d4b-2049-4a6c-8cd3-81e2ce199096 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquiring lock "refresh_cache-4243720a-45ff-439a-9753-a7da419082b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:44:08 compute-0 nova_compute[185389]: 2026-01-26 16:44:08.808 185393 DEBUG oslo_concurrency.lockutils [req-296425d0-23e9-4e75-a3e3-d97ff7f6b28a req-7bbd2d4b-2049-4a6c-8cd3-81e2ce199096 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquired lock "refresh_cache-4243720a-45ff-439a-9753-a7da419082b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:44:08 compute-0 nova_compute[185389]: 2026-01-26 16:44:08.808 185393 DEBUG nova.network.neutron [req-296425d0-23e9-4e75-a3e3-d97ff7f6b28a req-7bbd2d4b-2049-4a6c-8cd3-81e2ce199096 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 4243720a-45ff-439a-9753-a7da419082b2] Refreshing network info cache for port 7839c0a2-ac0b-4c45-8c81-670b0c2a638c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:44:09 compute-0 nova_compute[185389]: 2026-01-26 16:44:09.534 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:44:09 compute-0 nova_compute[185389]: 2026-01-26 16:44:09.637 185393 INFO nova.network.neutron [req-296425d0-23e9-4e75-a3e3-d97ff7f6b28a req-7bbd2d4b-2049-4a6c-8cd3-81e2ce199096 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 4243720a-45ff-439a-9753-a7da419082b2] Port 7839c0a2-ac0b-4c45-8c81-670b0c2a638c from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Jan 26 16:44:09 compute-0 nova_compute[185389]: 2026-01-26 16:44:09.637 185393 DEBUG nova.network.neutron [req-296425d0-23e9-4e75-a3e3-d97ff7f6b28a req-7bbd2d4b-2049-4a6c-8cd3-81e2ce199096 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 4243720a-45ff-439a-9753-a7da419082b2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:44:09 compute-0 nova_compute[185389]: 2026-01-26 16:44:09.666 185393 DEBUG oslo_concurrency.lockutils [req-296425d0-23e9-4e75-a3e3-d97ff7f6b28a req-7bbd2d4b-2049-4a6c-8cd3-81e2ce199096 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Releasing lock "refresh_cache-4243720a-45ff-439a-9753-a7da419082b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:44:09 compute-0 nova_compute[185389]: 2026-01-26 16:44:09.785 185393 DEBUG nova.network.neutron [-] [instance: 4243720a-45ff-439a-9753-a7da419082b2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:44:09 compute-0 nova_compute[185389]: 2026-01-26 16:44:09.803 185393 INFO nova.compute.manager [-] [instance: 4243720a-45ff-439a-9753-a7da419082b2] Took 3.56 seconds to deallocate network for instance.
Jan 26 16:44:09 compute-0 nova_compute[185389]: 2026-01-26 16:44:09.847 185393 DEBUG oslo_concurrency.lockutils [None req-f87ec5d6-55e2-4d08-a8f1-91f04db09a85 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:44:09 compute-0 nova_compute[185389]: 2026-01-26 16:44:09.848 185393 DEBUG oslo_concurrency.lockutils [None req-f87ec5d6-55e2-4d08-a8f1-91f04db09a85 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:44:10 compute-0 nova_compute[185389]: 2026-01-26 16:44:10.129 185393 DEBUG nova.compute.provider_tree [None req-f87ec5d6-55e2-4d08-a8f1-91f04db09a85 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:44:10 compute-0 nova_compute[185389]: 2026-01-26 16:44:10.144 185393 DEBUG nova.scheduler.client.report [None req-f87ec5d6-55e2-4d08-a8f1-91f04db09a85 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:44:10 compute-0 nova_compute[185389]: 2026-01-26 16:44:10.163 185393 DEBUG oslo_concurrency.lockutils [None req-f87ec5d6-55e2-4d08-a8f1-91f04db09a85 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.316s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:44:10 compute-0 nova_compute[185389]: 2026-01-26 16:44:10.190 185393 INFO nova.scheduler.client.report [None req-f87ec5d6-55e2-4d08-a8f1-91f04db09a85 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Deleted allocations for instance 4243720a-45ff-439a-9753-a7da419082b2
Jan 26 16:44:10 compute-0 nova_compute[185389]: 2026-01-26 16:44:10.263 185393 DEBUG oslo_concurrency.lockutils [None req-f87ec5d6-55e2-4d08-a8f1-91f04db09a85 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "4243720a-45ff-439a-9753-a7da419082b2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.486s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:44:11 compute-0 nova_compute[185389]: 2026-01-26 16:44:11.155 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:44:14 compute-0 podman[241510]: 2026-01-26 16:44:14.230114945 +0000 UTC m=+0.093102388 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 26 16:44:14 compute-0 nova_compute[185389]: 2026-01-26 16:44:14.537 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:44:16 compute-0 nova_compute[185389]: 2026-01-26 16:44:16.160 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:44:19 compute-0 nova_compute[185389]: 2026-01-26 16:44:19.543 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:44:21 compute-0 nova_compute[185389]: 2026-01-26 16:44:21.088 185393 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769445846.0876062, 4243720a-45ff-439a-9753-a7da419082b2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:44:21 compute-0 nova_compute[185389]: 2026-01-26 16:44:21.089 185393 INFO nova.compute.manager [-] [instance: 4243720a-45ff-439a-9753-a7da419082b2] VM Stopped (Lifecycle Event)
Jan 26 16:44:21 compute-0 nova_compute[185389]: 2026-01-26 16:44:21.126 185393 DEBUG nova.compute.manager [None req-e2712eb1-08eb-4c13-b255-87bc6e7c75e8 - - - - - -] [instance: 4243720a-45ff-439a-9753-a7da419082b2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:44:21 compute-0 nova_compute[185389]: 2026-01-26 16:44:21.164 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:44:21 compute-0 podman[241533]: 2026-01-26 16:44:21.224146752 +0000 UTC m=+0.102136995 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, distribution-scope=public, managed_by=edpm_ansible, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Jan 26 16:44:24 compute-0 podman[241552]: 2026-01-26 16:44:24.263517367 +0000 UTC m=+0.131802924 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260120, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute)
Jan 26 16:44:24 compute-0 nova_compute[185389]: 2026-01-26 16:44:24.549 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:44:26 compute-0 nova_compute[185389]: 2026-01-26 16:44:26.168 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:44:29 compute-0 nova_compute[185389]: 2026-01-26 16:44:29.552 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:44:29 compute-0 podman[201244]: time="2026-01-26T16:44:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 16:44:29 compute-0 podman[201244]: @ - - [26/Jan/2026:16:44:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 16:44:29 compute-0 podman[201244]: @ - - [26/Jan/2026:16:44:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4360 "" "Go-http-client/1.1"
Jan 26 16:44:30 compute-0 podman[241570]: 2026-01-26 16:44:30.200205773 +0000 UTC m=+0.083540151 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 26 16:44:31 compute-0 nova_compute[185389]: 2026-01-26 16:44:31.170 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:44:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:31.338 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 26 16:44:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:31.339 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 26 16:44:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:31.339 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f410>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:44:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:31.340 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f04ce8af020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:44:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:31.341 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f410>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:44:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:31.341 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f410>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:44:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:31.342 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f410>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:44:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:31.342 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f410>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:44:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:31.342 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f410>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:44:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:31.342 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f410>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:44:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:31.342 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f410>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:44:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:31.343 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f410>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:44:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:31.343 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f410>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:44:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:31.343 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f410>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:44:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:31.344 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f410>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:44:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:31.344 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f410>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:44:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:31.344 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8adb20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f410>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:44:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:31.344 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f410>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:44:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:31.345 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f410>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:44:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:31.345 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f410>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:44:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:31.345 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f410>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:44:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:31.345 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f410>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:44:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:31.346 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f410>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:44:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:31.346 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f410>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:44:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:31.346 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f410>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:44:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:31.346 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f410>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:44:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:31.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f410>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:44:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:31.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f410>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:44:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:31.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f410>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:44:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:31.349 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '60ba224f-9c5d-4eb4-b501-66d7339832b9', 'name': 'test_0', 'flavor': {'id': 'c2a8df4d-a1d7-42a3-8279-8c7de8a1a662', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '718285d9-0264-40f4-9fb3-d2faff180284'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'aa8f1f3bbce34237a208c8e92ca9286f', 'user_id': '3c0ab9326d69400aa6a4a91432885d7f', 'hostId': '5b2ad2004cb5a5985538ff82d4dda707a9aa9c0c35c745039e18e89b', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 26 16:44:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:31.352 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance a2578f61-3f19-40f4-a32f-97cf22569550 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 26 16:44:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:31.355 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/a2578f61-3f19-40f4-a32f-97cf22569550 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}f609241ecdf9402bd0546eda97196742cf90b225f1ce4eb867c55aad4d129116" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 26 16:44:31 compute-0 openstack_network_exporter[204387]: ERROR   16:44:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 16:44:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:44:31 compute-0 openstack_network_exporter[204387]: ERROR   16:44:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 16:44:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.183 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1960 Content-Type: application/json Date: Mon, 26 Jan 2026 16:44:31 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-c6cf74fe-839a-4e5c-a03b-c4a8464636c6 x-openstack-request-id: req-c6cf74fe-839a-4e5c-a03b-c4a8464636c6 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.183 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "a2578f61-3f19-40f4-a32f-97cf22569550", "name": "vn-vo2qfhx-3o3bmw4mfsy3-eo23jljqgyv6-vnf-pi7veetjym6y", "status": "ACTIVE", "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "user_id": "3c0ab9326d69400aa6a4a91432885d7f", "metadata": {"metering.server_group": "06b33269-d1c6-4fb9-a44b-be304982a550"}, "hostId": "5b2ad2004cb5a5985538ff82d4dda707a9aa9c0c35c745039e18e89b", "image": {"id": "718285d9-0264-40f4-9fb3-d2faff180284", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/718285d9-0264-40f4-9fb3-d2faff180284"}]}, "flavor": {"id": "c2a8df4d-a1d7-42a3-8279-8c7de8a1a662", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/c2a8df4d-a1d7-42a3-8279-8c7de8a1a662"}]}, "created": "2026-01-26T16:43:50Z", "updated": "2026-01-26T16:44:04Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.107", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:ac:8d:a5"}, {"version": 4, "addr": "192.168.122.229", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:ac:8d:a5"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/a2578f61-3f19-40f4-a32f-97cf22569550"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/a2578f61-3f19-40f4-a32f-97cf22569550"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2026-01-26T16:44:04.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000004", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, 
"OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.183 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/a2578f61-3f19-40f4-a32f-97cf22569550 used request id req-c6cf74fe-839a-4e5c-a03b-c4a8464636c6 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.185 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'a2578f61-3f19-40f4-a32f-97cf22569550', 'name': 'vn-vo2qfhx-3o3bmw4mfsy3-eo23jljqgyv6-vnf-pi7veetjym6y', 'flavor': {'id': 'c2a8df4d-a1d7-42a3-8279-8c7de8a1a662', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '718285d9-0264-40f4-9fb3-d2faff180284'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'aa8f1f3bbce34237a208c8e92ca9286f', 'user_id': '3c0ab9326d69400aa6a4a91432885d7f', 'hostId': '5b2ad2004cb5a5985538ff82d4dda707a9aa9c0c35c745039e18e89b', 'status': 'active', 'metadata': {'metering.server_group': '06b33269-d1c6-4fb9-a44b-be304982a550'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.188 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '2ee04f75-dc75-489c-85b5-19cd6d573bf1', 'name': 'vn-vo2qfhx-yjgyfumqtzbt-dg43u2kjlm35-vnf-tep2eibpzhxe', 'flavor': {'id': 'c2a8df4d-a1d7-42a3-8279-8c7de8a1a662', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '718285d9-0264-40f4-9fb3-d2faff180284'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'aa8f1f3bbce34237a208c8e92ca9286f', 'user_id': '3c0ab9326d69400aa6a4a91432885d7f', 'hostId': '5b2ad2004cb5a5985538ff82d4dda707a9aa9c0c35c745039e18e89b', 'status': 'active', 'metadata': {'metering.server_group': '06b33269-d1c6-4fb9-a44b-be304982a550'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.188 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.188 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.188 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.189 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.189 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-26T16:44:32.188992) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:44:32 compute-0 podman[241594]: 2026-01-26 16:44:32.255370927 +0000 UTC m=+0.131277050 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.289 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.289 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.290 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.377 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.378 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.378 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.474 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.write.bytes volume: 41848832 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.475 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.475 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.476 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.477 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f04ce8af080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.477 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.477 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.477 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.477 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.477 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.latency volume: 876270399 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.478 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.latency volume: 10769042 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.478 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.479 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.479 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.479 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.480 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.write.latency volume: 1488074591 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.480 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.write.latency volume: 10680310 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.481 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.482 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.482 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f04ce8af0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.482 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.482 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.482 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.482 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.483 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.483 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.483 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.484 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.484 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.484 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.485 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.write.requests volume: 242 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.485 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.485 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.486 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.486 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f04ce8af470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.486 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.486 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.486 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.487 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-26T16:44:32.477546) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.487 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.487 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-26T16:44:32.482806) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.488 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-26T16:44:32.487308) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.491 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.bytes.delta volume: 252 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.496 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for a2578f61-3f19-40f4-a32f-97cf22569550 / tap58a644b5-e3 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.496 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.501 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/network.incoming.bytes.delta volume: 3599 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.502 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.502 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f04ce8ac260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.502 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.502 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.502 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.503 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.503 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.503 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-vo2qfhx-3o3bmw4mfsy3-eo23jljqgyv6-vnf-pi7veetjym6y>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-vo2qfhx-3o3bmw4mfsy3-eo23jljqgyv6-vnf-pi7veetjym6y>]
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.504 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f04cfa81c70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.504 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.504 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.504 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.505 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.506 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2026-01-26T16:44:32.503040) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.506 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-26T16:44:32.505031) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.550 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/cpu volume: 38680000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.583 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/cpu volume: 27780000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.607 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/cpu volume: 249220000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.608 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.608 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f04ce8ac170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.609 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.609 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.609 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.609 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.609 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.609 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-26T16:44:32.609349) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.610 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.610 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/network.incoming.packets volume: 56 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.611 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.611 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f04ce8af140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.611 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.611 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.611 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.611 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.612 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.612 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f04ce8adb50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.612 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.612 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.613 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.613 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.613 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.613 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.613 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-26T16:44:32.611682) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.614 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-26T16:44:32.613142) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.614 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.614 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.614 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f04ce8ad9a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.614 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.615 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.615 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.615 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.615 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.bytes volume: 2272 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.615 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.616 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-26T16:44:32.615286) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.616 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/network.outgoing.bytes volume: 7502 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.616 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.616 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f04ce8af1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.617 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.617 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.617 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.617 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.618 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.618 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f04ce8ad8e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.618 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-26T16:44:32.617464) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.618 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.618 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.618 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.619 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.619 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.619 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.619 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-26T16:44:32.619079) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.620 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/network.outgoing.packets volume: 65 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.620 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.620 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f04ce8ada60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.621 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.621 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.621 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.621 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.621 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-26T16:44:32.621371) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.621 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.622 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.622 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/network.outgoing.bytes.delta volume: 2742 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.623 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.623 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f04ce8adaf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.623 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.623 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8adb20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.623 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8adb20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.624 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.624 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.624 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-vo2qfhx-3o3bmw4mfsy3-eo23jljqgyv6-vnf-pi7veetjym6y>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-vo2qfhx-3o3bmw4mfsy3-eo23jljqgyv6-vnf-pi7veetjym6y>]
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.624 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f04ce8ad880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.625 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2026-01-26T16:44:32.624028) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.625 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.625 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.625 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.625 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.626 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.626 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-26T16:44:32.625811) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.626 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.627 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.627 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.628 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f04ce8af3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.628 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.628 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.628 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.628 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.628 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-26T16:44:32.628524) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.628 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/memory.usage volume: 48.79296875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.629 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.629 14 WARNING ceilometer.compute.pollsters [-] memory.usage statistic in not available for instance a2578f61-3f19-40f4-a32f-97cf22569550: ceilometer.compute.pollsters.NoVolumeException
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.629 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/memory.usage volume: 49.125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.630 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.630 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f04ce8af6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.630 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.631 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.631 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.631 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.631 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.632 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.632 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.633 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-26T16:44:32.631501) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.633 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.634 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f04ce8af410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.634 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.634 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.634 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.634 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.635 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.bytes volume: 2220 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.635 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.636 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/network.incoming.bytes volume: 8448 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.637 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.637 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f04ce8ac470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.637 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.638 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-26T16:44:32.634875) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.638 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.638 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.638 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.638 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.639 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.639 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.640 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.640 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f04d17142f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.640 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-26T16:44:32.638485) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.640 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.641 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.641 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.641 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.642 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-26T16:44:32.641563) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.674 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.675 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.675 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.700 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.700 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.700 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.724 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.724 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.724 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.725 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.725 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f04ce8afe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.725 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.725 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.725 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.726 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.726 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.726 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.727 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.727 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.727 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.728 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.728 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.729 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.729 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.730 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.730 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f04ce8aede0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.731 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.731 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.731 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.731 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.731 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.731 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.732 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.732 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.bytes volume: 18348032 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.733 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.733 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.733 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.734 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.735 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.735 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.736 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f04ce8aef00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.736 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.736 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.736 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.737 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.737 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.latency volume: 331117122 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.738 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.latency volume: 97972603 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.738 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.latency volume: 58692222 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.739 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.latency volume: 327353926 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.739 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.740 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.latency volume: 1455700 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.740 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.read.latency volume: 489623248 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.741 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.read.latency volume: 79957548 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.741 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.read.latency volume: 54491661 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.742 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.742 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f04ce8ac710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.743 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.743 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.743 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.743 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.743 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.744 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.744 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.745 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.745 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f04ce8aef60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.745 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.745 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.746 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.746 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.746 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.746 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.747 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.747 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.requests volume: 573 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.748 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.748 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.749 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.750 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.750 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.751 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.751 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-26T16:44:32.726149) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.751 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f04ce8aefc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.751 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-26T16:44:32.731537) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.751 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.752 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-26T16:44:32.737451) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.752 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.752 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.752 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.752 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.753 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.753 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.754 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.755 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-26T16:44:32.743619) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.754 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.755 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.755 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.756 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.756 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.757 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.758 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.758 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-26T16:44:32.746156) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.758 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.758 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.758 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.758 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.758 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-26T16:44:32.752755) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.758 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.759 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.759 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.759 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.759 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.759 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.759 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.759 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.759 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.759 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.759 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.759 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.759 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.759 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.759 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.760 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.760 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.760 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.760 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.760 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:44:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:44:32.760 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:44:34 compute-0 nova_compute[185389]: 2026-01-26 16:44:34.555 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:44:35 compute-0 podman[241611]: 2026-01-26 16:44:35.241442653 +0000 UTC m=+0.115346416 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Jan 26 16:44:36 compute-0 nova_compute[185389]: 2026-01-26 16:44:36.173 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:44:36 compute-0 podman[241633]: 2026-01-26 16:44:36.266367883 +0000 UTC m=+0.136896153 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1214.1726694543, release-0.7.12=, io.openshift.tags=base rhel9, name=ubi9, version=9.4, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9.)
Jan 26 16:44:36 compute-0 podman[241632]: 2026-01-26 16:44:36.290106087 +0000 UTC m=+0.165880979 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, 
io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 26 16:44:37 compute-0 ovn_controller[97699]: 2026-01-26T16:44:37Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ac:8d:a5 192.168.0.107
Jan 26 16:44:37 compute-0 ovn_controller[97699]: 2026-01-26T16:44:37Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ac:8d:a5 192.168.0.107
Jan 26 16:44:37 compute-0 nova_compute[185389]: 2026-01-26 16:44:37.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:44:37 compute-0 nova_compute[185389]: 2026-01-26 16:44:37.721 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:44:39 compute-0 nova_compute[185389]: 2026-01-26 16:44:39.557 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:44:39 compute-0 nova_compute[185389]: 2026-01-26 16:44:39.721 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:44:39 compute-0 nova_compute[185389]: 2026-01-26 16:44:39.722 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:44:41 compute-0 nova_compute[185389]: 2026-01-26 16:44:41.005 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "refresh_cache-2ee04f75-dc75-489c-85b5-19cd6d573bf1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:44:41 compute-0 nova_compute[185389]: 2026-01-26 16:44:41.006 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquired lock "refresh_cache-2ee04f75-dc75-489c-85b5-19cd6d573bf1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:44:41 compute-0 nova_compute[185389]: 2026-01-26 16:44:41.006 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 26 16:44:41 compute-0 nova_compute[185389]: 2026-01-26 16:44:41.177 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:44:42 compute-0 ovn_controller[97699]: 2026-01-26T16:44:42Z|00051|memory_trim|INFO|Detected inactivity (last active 30010 ms ago): trimming memory
Jan 26 16:44:43 compute-0 nova_compute[185389]: 2026-01-26 16:44:43.155 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] Updating instance_info_cache with network_info: [{"id": "5e252863-184d-4e1e-a33d-6e280cd72b51", "address": "fa:16:3e:65:38:01", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.173", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e252863-18", "ovs_interfaceid": "5e252863-184d-4e1e-a33d-6e280cd72b51", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:44:43 compute-0 nova_compute[185389]: 2026-01-26 16:44:43.191 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Releasing lock "refresh_cache-2ee04f75-dc75-489c-85b5-19cd6d573bf1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:44:43 compute-0 nova_compute[185389]: 2026-01-26 16:44:43.192 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 26 16:44:43 compute-0 nova_compute[185389]: 2026-01-26 16:44:43.192 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:44:43 compute-0 nova_compute[185389]: 2026-01-26 16:44:43.193 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:44:43 compute-0 nova_compute[185389]: 2026-01-26 16:44:43.193 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:44:43 compute-0 nova_compute[185389]: 2026-01-26 16:44:43.194 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:44:43 compute-0 nova_compute[185389]: 2026-01-26 16:44:43.194 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:44:43 compute-0 nova_compute[185389]: 2026-01-26 16:44:43.243 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:44:43 compute-0 nova_compute[185389]: 2026-01-26 16:44:43.243 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:44:43 compute-0 nova_compute[185389]: 2026-01-26 16:44:43.244 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:44:43 compute-0 nova_compute[185389]: 2026-01-26 16:44:43.244 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:44:43 compute-0 nova_compute[185389]: 2026-01-26 16:44:43.359 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:44:43 compute-0 nova_compute[185389]: 2026-01-26 16:44:43.461 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:44:43 compute-0 nova_compute[185389]: 2026-01-26 16:44:43.462 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:44:43 compute-0 nova_compute[185389]: 2026-01-26 16:44:43.526 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:44:43 compute-0 nova_compute[185389]: 2026-01-26 16:44:43.527 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:44:43 compute-0 nova_compute[185389]: 2026-01-26 16:44:43.599 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:44:43 compute-0 nova_compute[185389]: 2026-01-26 16:44:43.600 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:44:43 compute-0 nova_compute[185389]: 2026-01-26 16:44:43.703 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:44:43 compute-0 nova_compute[185389]: 2026-01-26 16:44:43.719 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:44:43 compute-0 nova_compute[185389]: 2026-01-26 16:44:43.816 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:44:43 compute-0 nova_compute[185389]: 2026-01-26 16:44:43.818 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:44:43 compute-0 nova_compute[185389]: 2026-01-26 16:44:43.907 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:44:43 compute-0 nova_compute[185389]: 2026-01-26 16:44:43.910 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:44:44 compute-0 nova_compute[185389]: 2026-01-26 16:44:44.009 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:44:44 compute-0 nova_compute[185389]: 2026-01-26 16:44:44.010 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:44:44 compute-0 nova_compute[185389]: 2026-01-26 16:44:44.073 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:44:44 compute-0 nova_compute[185389]: 2026-01-26 16:44:44.080 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:44:44 compute-0 nova_compute[185389]: 2026-01-26 16:44:44.141 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:44:44 compute-0 nova_compute[185389]: 2026-01-26 16:44:44.142 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:44:44 compute-0 nova_compute[185389]: 2026-01-26 16:44:44.204 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:44:44 compute-0 nova_compute[185389]: 2026-01-26 16:44:44.206 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:44:44 compute-0 nova_compute[185389]: 2026-01-26 16:44:44.276 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.eph0 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:44:44 compute-0 nova_compute[185389]: 2026-01-26 16:44:44.278 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:44:44 compute-0 nova_compute[185389]: 2026-01-26 16:44:44.340 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:44:44 compute-0 nova_compute[185389]: 2026-01-26 16:44:44.558 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:44:44 compute-0 podman[241729]: 2026-01-26 16:44:44.771035453 +0000 UTC m=+0.081042373 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 26 16:44:44 compute-0 nova_compute[185389]: 2026-01-26 16:44:44.782 185393 WARNING nova.virt.libvirt.driver [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:44:44 compute-0 nova_compute[185389]: 2026-01-26 16:44:44.783 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4845MB free_disk=72.38088607788086GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:44:44 compute-0 nova_compute[185389]: 2026-01-26 16:44:44.784 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:44:44 compute-0 nova_compute[185389]: 2026-01-26 16:44:44.784 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:44:44 compute-0 nova_compute[185389]: 2026-01-26 16:44:44.895 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance 60ba224f-9c5d-4eb4-b501-66d7339832b9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:44:44 compute-0 nova_compute[185389]: 2026-01-26 16:44:44.896 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance 2ee04f75-dc75-489c-85b5-19cd6d573bf1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:44:44 compute-0 nova_compute[185389]: 2026-01-26 16:44:44.896 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance a2578f61-3f19-40f4-a32f-97cf22569550 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:44:44 compute-0 nova_compute[185389]: 2026-01-26 16:44:44.896 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:44:44 compute-0 nova_compute[185389]: 2026-01-26 16:44:44.897 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:44:45 compute-0 nova_compute[185389]: 2026-01-26 16:44:45.035 185393 DEBUG nova.compute.provider_tree [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:44:45 compute-0 nova_compute[185389]: 2026-01-26 16:44:45.063 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:44:45 compute-0 nova_compute[185389]: 2026-01-26 16:44:45.088 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:44:45 compute-0 nova_compute[185389]: 2026-01-26 16:44:45.089 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.306s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:44:46 compute-0 nova_compute[185389]: 2026-01-26 16:44:46.180 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:44:47 compute-0 nova_compute[185389]: 2026-01-26 16:44:47.083 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:44:47 compute-0 nova_compute[185389]: 2026-01-26 16:44:47.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:44:49 compute-0 nova_compute[185389]: 2026-01-26 16:44:49.562 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:44:51 compute-0 nova_compute[185389]: 2026-01-26 16:44:51.183 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:44:52 compute-0 podman[241754]: 2026-01-26 16:44:52.247437334 +0000 UTC m=+0.128562726 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.openshift.tags=minimal rhel9, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, vcs-type=git, io.buildah.version=1.33.7, version=9.6, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, container_name=openstack_network_exporter, config_id=openstack_network_exporter, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Jan 26 16:44:54 compute-0 nova_compute[185389]: 2026-01-26 16:44:54.566 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:44:55 compute-0 podman[241774]: 2026-01-26 16:44:55.237380947 +0000 UTC m=+0.111937444 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260120, org.label-schema.name=CentOS Stream 10 Base Image)
Jan 26 16:44:56 compute-0 nova_compute[185389]: 2026-01-26 16:44:56.186 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:44:59 compute-0 nova_compute[185389]: 2026-01-26 16:44:59.569 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:44:59 compute-0 podman[201244]: time="2026-01-26T16:44:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 16:44:59 compute-0 podman[201244]: @ - - [26/Jan/2026:16:44:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 16:44:59 compute-0 podman[201244]: @ - - [26/Jan/2026:16:44:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4370 "" "Go-http-client/1.1"
Jan 26 16:45:01 compute-0 nova_compute[185389]: 2026-01-26 16:45:01.188 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:45:01 compute-0 podman[241792]: 2026-01-26 16:45:01.211298088 +0000 UTC m=+0.094438558 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 26 16:45:01 compute-0 openstack_network_exporter[204387]: ERROR   16:45:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 16:45:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:45:01 compute-0 openstack_network_exporter[204387]: ERROR   16:45:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 16:45:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:45:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:45:01.722 106955 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:45:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:45:01.726 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:45:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:45:01.727 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:45:03 compute-0 podman[241815]: 2026-01-26 16:45:03.164436048 +0000 UTC m=+0.060848485 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 26 16:45:04 compute-0 nova_compute[185389]: 2026-01-26 16:45:04.574 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:45:06 compute-0 nova_compute[185389]: 2026-01-26 16:45:06.190 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:45:06 compute-0 podman[241834]: 2026-01-26 16:45:06.231356492 +0000 UTC m=+0.104035879 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 26 16:45:07 compute-0 podman[241854]: 2026-01-26 16:45:07.247458752 +0000 UTC m=+0.112382526 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release-0.7.12=, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, release=1214.1726694543, vcs-type=git, version=9.4, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, vendor=Red Hat, Inc.)
Jan 26 16:45:07 compute-0 podman[241853]: 2026-01-26 16:45:07.280452569 +0000 UTC m=+0.140650685 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 26 16:45:09 compute-0 nova_compute[185389]: 2026-01-26 16:45:09.577 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:45:11 compute-0 nova_compute[185389]: 2026-01-26 16:45:11.193 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:45:14 compute-0 nova_compute[185389]: 2026-01-26 16:45:14.579 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:45:15 compute-0 podman[241904]: 2026-01-26 16:45:15.229832236 +0000 UTC m=+0.100950495 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 16:45:16 compute-0 nova_compute[185389]: 2026-01-26 16:45:16.197 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:45:19 compute-0 nova_compute[185389]: 2026-01-26 16:45:19.582 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:45:21 compute-0 nova_compute[185389]: 2026-01-26 16:45:21.199 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:45:23 compute-0 podman[241928]: 2026-01-26 16:45:23.201715475 +0000 UTC m=+0.083928652 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, release=1755695350, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.33.7, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, managed_by=edpm_ansible)
Jan 26 16:45:24 compute-0 nova_compute[185389]: 2026-01-26 16:45:24.586 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:45:26 compute-0 nova_compute[185389]: 2026-01-26 16:45:26.202 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:45:26 compute-0 podman[241950]: 2026-01-26 16:45:26.203513718 +0000 UTC m=+0.088463055 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260120, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, config_id=ceilometer_agent_compute)
Jan 26 16:45:29 compute-0 nova_compute[185389]: 2026-01-26 16:45:29.589 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:45:29 compute-0 podman[201244]: time="2026-01-26T16:45:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 16:45:29 compute-0 podman[201244]: @ - - [26/Jan/2026:16:45:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 16:45:29 compute-0 podman[201244]: @ - - [26/Jan/2026:16:45:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4376 "" "Go-http-client/1.1"
Jan 26 16:45:31 compute-0 nova_compute[185389]: 2026-01-26 16:45:31.205 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:45:31 compute-0 openstack_network_exporter[204387]: ERROR   16:45:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 16:45:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:45:31 compute-0 openstack_network_exporter[204387]: ERROR   16:45:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 16:45:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:45:32 compute-0 podman[241969]: 2026-01-26 16:45:32.188794658 +0000 UTC m=+0.074291841 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 26 16:45:34 compute-0 podman[241993]: 2026-01-26 16:45:34.263278997 +0000 UTC m=+0.135019802 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:45:34 compute-0 nova_compute[185389]: 2026-01-26 16:45:34.591 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:45:36 compute-0 nova_compute[185389]: 2026-01-26 16:45:36.208 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:45:37 compute-0 podman[242012]: 2026-01-26 16:45:37.225376631 +0000 UTC m=+0.104711176 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Jan 26 16:45:37 compute-0 nova_compute[185389]: 2026-01-26 16:45:37.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:45:37 compute-0 nova_compute[185389]: 2026-01-26 16:45:37.721 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:45:38 compute-0 podman[242032]: 2026-01-26 16:45:38.256742106 +0000 UTC m=+0.125622606 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, io.openshift.expose-services=, name=ubi9, build-date=2024-09-18T21:23:30, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, config_id=kepler, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0)
Jan 26 16:45:38 compute-0 podman[242031]: 2026-01-26 16:45:38.273991734 +0000 UTC m=+0.159760653 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, 
config_id=ovn_controller, container_name=ovn_controller)
Jan 26 16:45:39 compute-0 nova_compute[185389]: 2026-01-26 16:45:39.594 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:45:40 compute-0 nova_compute[185389]: 2026-01-26 16:45:40.721 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:45:40 compute-0 nova_compute[185389]: 2026-01-26 16:45:40.722 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:45:40 compute-0 nova_compute[185389]: 2026-01-26 16:45:40.723 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 16:45:41 compute-0 nova_compute[185389]: 2026-01-26 16:45:41.212 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:45:43 compute-0 nova_compute[185389]: 2026-01-26 16:45:43.649 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "refresh_cache-60ba224f-9c5d-4eb4-b501-66d7339832b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:45:43 compute-0 nova_compute[185389]: 2026-01-26 16:45:43.650 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquired lock "refresh_cache-60ba224f-9c5d-4eb4-b501-66d7339832b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:45:43 compute-0 nova_compute[185389]: 2026-01-26 16:45:43.651 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 26 16:45:43 compute-0 nova_compute[185389]: 2026-01-26 16:45:43.652 185393 DEBUG nova.objects.instance [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 60ba224f-9c5d-4eb4-b501-66d7339832b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:45:44 compute-0 nova_compute[185389]: 2026-01-26 16:45:44.596 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:45:45 compute-0 nova_compute[185389]: 2026-01-26 16:45:45.441 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Updating instance_info_cache with network_info: [{"id": "0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f", "address": "fa:16:3e:b0:51:31", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.57", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f88f3ae-fb", "ovs_interfaceid": "0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:45:45 compute-0 nova_compute[185389]: 2026-01-26 16:45:45.461 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Releasing lock "refresh_cache-60ba224f-9c5d-4eb4-b501-66d7339832b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:45:45 compute-0 nova_compute[185389]: 2026-01-26 16:45:45.462 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 26 16:45:45 compute-0 nova_compute[185389]: 2026-01-26 16:45:45.463 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:45:45 compute-0 nova_compute[185389]: 2026-01-26 16:45:45.463 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:45:45 compute-0 nova_compute[185389]: 2026-01-26 16:45:45.464 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:45:45 compute-0 nova_compute[185389]: 2026-01-26 16:45:45.464 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:45:45 compute-0 nova_compute[185389]: 2026-01-26 16:45:45.464 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:45:45 compute-0 nova_compute[185389]: 2026-01-26 16:45:45.503 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:45:45 compute-0 nova_compute[185389]: 2026-01-26 16:45:45.504 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:45:45 compute-0 nova_compute[185389]: 2026-01-26 16:45:45.504 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:45:45 compute-0 nova_compute[185389]: 2026-01-26 16:45:45.504 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:45:45 compute-0 nova_compute[185389]: 2026-01-26 16:45:45.676 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:45:45 compute-0 nova_compute[185389]: 2026-01-26 16:45:45.782 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json" returned: 0 in 0.106s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:45:45 compute-0 nova_compute[185389]: 2026-01-26 16:45:45.784 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:45:45 compute-0 nova_compute[185389]: 2026-01-26 16:45:45.867 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:45:45 compute-0 nova_compute[185389]: 2026-01-26 16:45:45.868 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:45:45 compute-0 nova_compute[185389]: 2026-01-26 16:45:45.963 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:45:45 compute-0 nova_compute[185389]: 2026-01-26 16:45:45.964 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:45:46 compute-0 nova_compute[185389]: 2026-01-26 16:45:46.028 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:45:46 compute-0 nova_compute[185389]: 2026-01-26 16:45:46.039 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:45:46 compute-0 nova_compute[185389]: 2026-01-26 16:45:46.104 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:45:46 compute-0 nova_compute[185389]: 2026-01-26 16:45:46.106 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:45:46 compute-0 nova_compute[185389]: 2026-01-26 16:45:46.174 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:45:46 compute-0 nova_compute[185389]: 2026-01-26 16:45:46.175 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:45:46 compute-0 podman[242088]: 2026-01-26 16:45:46.218357327 +0000 UTC m=+0.099207297 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 26 16:45:46 compute-0 nova_compute[185389]: 2026-01-26 16:45:46.216 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:45:46 compute-0 nova_compute[185389]: 2026-01-26 16:45:46.276 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:45:46 compute-0 nova_compute[185389]: 2026-01-26 16:45:46.277 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:45:46 compute-0 nova_compute[185389]: 2026-01-26 16:45:46.356 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:45:46 compute-0 nova_compute[185389]: 2026-01-26 16:45:46.369 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:45:46 compute-0 nova_compute[185389]: 2026-01-26 16:45:46.443 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:45:46 compute-0 nova_compute[185389]: 2026-01-26 16:45:46.445 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:45:46 compute-0 nova_compute[185389]: 2026-01-26 16:45:46.543 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:45:46 compute-0 nova_compute[185389]: 2026-01-26 16:45:46.545 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:45:46 compute-0 nova_compute[185389]: 2026-01-26 16:45:46.625 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.eph0 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:45:46 compute-0 nova_compute[185389]: 2026-01-26 16:45:46.627 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:45:46 compute-0 nova_compute[185389]: 2026-01-26 16:45:46.689 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:45:47 compute-0 nova_compute[185389]: 2026-01-26 16:45:47.235 185393 WARNING nova.virt.libvirt.driver [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:45:47 compute-0 nova_compute[185389]: 2026-01-26 16:45:47.237 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4792MB free_disk=72.38090515136719GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:45:47 compute-0 nova_compute[185389]: 2026-01-26 16:45:47.237 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:45:47 compute-0 nova_compute[185389]: 2026-01-26 16:45:47.238 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:45:47 compute-0 nova_compute[185389]: 2026-01-26 16:45:47.352 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance 60ba224f-9c5d-4eb4-b501-66d7339832b9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:45:47 compute-0 nova_compute[185389]: 2026-01-26 16:45:47.352 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance 2ee04f75-dc75-489c-85b5-19cd6d573bf1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:45:47 compute-0 nova_compute[185389]: 2026-01-26 16:45:47.353 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance a2578f61-3f19-40f4-a32f-97cf22569550 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:45:47 compute-0 nova_compute[185389]: 2026-01-26 16:45:47.353 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:45:47 compute-0 nova_compute[185389]: 2026-01-26 16:45:47.353 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:45:47 compute-0 nova_compute[185389]: 2026-01-26 16:45:47.481 185393 DEBUG nova.compute.provider_tree [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:45:47 compute-0 nova_compute[185389]: 2026-01-26 16:45:47.506 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:45:47 compute-0 nova_compute[185389]: 2026-01-26 16:45:47.508 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:45:47 compute-0 nova_compute[185389]: 2026-01-26 16:45:47.508 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.270s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:45:49 compute-0 nova_compute[185389]: 2026-01-26 16:45:49.600 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:45:49 compute-0 nova_compute[185389]: 2026-01-26 16:45:49.764 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:45:49 compute-0 nova_compute[185389]: 2026-01-26 16:45:49.765 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:45:49 compute-0 nova_compute[185389]: 2026-01-26 16:45:49.816 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:45:51 compute-0 nova_compute[185389]: 2026-01-26 16:45:51.223 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:45:51 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:45:51.365 106955 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '0e:35:12', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'de:09:3c:73:7d:ab'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:45:51 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:45:51.368 106955 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 26 16:45:51 compute-0 nova_compute[185389]: 2026-01-26 16:45:51.367 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:45:52 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:45:52.371 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1c72c11d-5050-47c3-89e8-912766588fb3, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:45:54 compute-0 podman[242136]: 2026-01-26 16:45:54.222561525 +0000 UTC m=+0.092110115 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, container_name=openstack_network_exporter, distribution-scope=public, name=ubi9-minimal, managed_by=edpm_ansible, vcs-type=git, vendor=Red Hat, Inc., version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Jan 26 16:45:54 compute-0 nova_compute[185389]: 2026-01-26 16:45:54.603 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:45:56 compute-0 nova_compute[185389]: 2026-01-26 16:45:56.225 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:45:57 compute-0 podman[242155]: 2026-01-26 16:45:57.25124637 +0000 UTC m=+0.124599047 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.build-date=20260120, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Jan 26 16:45:59 compute-0 nova_compute[185389]: 2026-01-26 16:45:59.605 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:45:59 compute-0 podman[201244]: time="2026-01-26T16:45:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 16:45:59 compute-0 podman[201244]: @ - - [26/Jan/2026:16:45:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 16:45:59 compute-0 podman[201244]: @ - - [26/Jan/2026:16:45:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4375 "" "Go-http-client/1.1"
Jan 26 16:46:01 compute-0 nova_compute[185389]: 2026-01-26 16:46:01.228 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:46:01 compute-0 openstack_network_exporter[204387]: ERROR   16:46:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 16:46:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:46:01 compute-0 openstack_network_exporter[204387]: ERROR   16:46:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 16:46:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:46:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:46:01.724 106955 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:46:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:46:01.725 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:46:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:46:01.726 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:46:03 compute-0 podman[242174]: 2026-01-26 16:46:03.207285706 +0000 UTC m=+0.086283386 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 26 16:46:04 compute-0 nova_compute[185389]: 2026-01-26 16:46:04.610 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:46:05 compute-0 podman[242197]: 2026-01-26 16:46:05.248356105 +0000 UTC m=+0.116877438 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:46:05 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 26 16:46:06 compute-0 nova_compute[185389]: 2026-01-26 16:46:06.231 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:46:08 compute-0 podman[242217]: 2026-01-26 16:46:08.258795795 +0000 UTC m=+0.136733588 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 26 16:46:08 compute-0 podman[242237]: 2026-01-26 16:46:08.454917675 +0000 UTC m=+0.128639357 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, distribution-scope=public, vendor=Red Hat, Inc., version=9.4, vcs-type=git, name=ubi9, architecture=x86_64, config_id=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9)
Jan 26 16:46:08 compute-0 podman[242238]: 2026-01-26 16:46:08.478126366 +0000 UTC m=+0.160538964 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, tcib_managed=true)
Jan 26 16:46:09 compute-0 nova_compute[185389]: 2026-01-26 16:46:09.612 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:46:11 compute-0 nova_compute[185389]: 2026-01-26 16:46:11.235 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:46:14 compute-0 nova_compute[185389]: 2026-01-26 16:46:14.615 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:46:16 compute-0 nova_compute[185389]: 2026-01-26 16:46:16.238 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:46:17 compute-0 podman[242280]: 2026-01-26 16:46:17.231423326 +0000 UTC m=+0.096058323 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 26 16:46:19 compute-0 nova_compute[185389]: 2026-01-26 16:46:19.619 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:46:21 compute-0 nova_compute[185389]: 2026-01-26 16:46:21.241 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:46:24 compute-0 nova_compute[185389]: 2026-01-26 16:46:24.621 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:46:24 compute-0 ovn_controller[97699]: 2026-01-26T16:46:24Z|00052|memory_trim|INFO|Detected inactivity (last active 30019 ms ago): trimming memory
Jan 26 16:46:25 compute-0 podman[242303]: 2026-01-26 16:46:25.24609466 +0000 UTC m=+0.128824743 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., release=1755695350, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, vendor=Red Hat, Inc., managed_by=edpm_ansible, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=openstack_network_exporter, container_name=openstack_network_exporter, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container)
Jan 26 16:46:26 compute-0 nova_compute[185389]: 2026-01-26 16:46:26.245 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:46:28 compute-0 podman[242322]: 2026-01-26 16:46:28.263653562 +0000 UTC m=+0.133889861 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260120, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, config_id=ceilometer_agent_compute, tcib_managed=true)
Jan 26 16:46:29 compute-0 nova_compute[185389]: 2026-01-26 16:46:29.625 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:46:29 compute-0 podman[201244]: time="2026-01-26T16:46:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 16:46:29 compute-0 podman[201244]: @ - - [26/Jan/2026:16:46:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 16:46:29 compute-0 podman[201244]: @ - - [26/Jan/2026:16:46:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4382 "" "Go-http-client/1.1"
Jan 26 16:46:31 compute-0 nova_compute[185389]: 2026-01-26 16:46:31.249 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.339 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.340 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.340 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.341 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f04ce8af020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.342 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.342 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.342 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.342 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.343 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.343 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.343 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.343 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.344 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.345 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.345 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.345 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.345 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8adb20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.346 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.346 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.346 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.346 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.346 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce40f5c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.350 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '60ba224f-9c5d-4eb4-b501-66d7339832b9', 'name': 'test_0', 'flavor': {'id': 'c2a8df4d-a1d7-42a3-8279-8c7de8a1a662', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '718285d9-0264-40f4-9fb3-d2faff180284'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'aa8f1f3bbce34237a208c8e92ca9286f', 'user_id': '3c0ab9326d69400aa6a4a91432885d7f', 'hostId': '5b2ad2004cb5a5985538ff82d4dda707a9aa9c0c35c745039e18e89b', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.355 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'a2578f61-3f19-40f4-a32f-97cf22569550', 'name': 'vn-vo2qfhx-3o3bmw4mfsy3-eo23jljqgyv6-vnf-pi7veetjym6y', 'flavor': {'id': 'c2a8df4d-a1d7-42a3-8279-8c7de8a1a662', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '718285d9-0264-40f4-9fb3-d2faff180284'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'aa8f1f3bbce34237a208c8e92ca9286f', 'user_id': '3c0ab9326d69400aa6a4a91432885d7f', 'hostId': '5b2ad2004cb5a5985538ff82d4dda707a9aa9c0c35c745039e18e89b', 'status': 'active', 'metadata': {'metering.server_group': '06b33269-d1c6-4fb9-a44b-be304982a550'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.359 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '2ee04f75-dc75-489c-85b5-19cd6d573bf1', 'name': 'vn-vo2qfhx-yjgyfumqtzbt-dg43u2kjlm35-vnf-tep2eibpzhxe', 'flavor': {'id': 'c2a8df4d-a1d7-42a3-8279-8c7de8a1a662', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '718285d9-0264-40f4-9fb3-d2faff180284'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'aa8f1f3bbce34237a208c8e92ca9286f', 'user_id': '3c0ab9326d69400aa6a4a91432885d7f', 'hostId': '5b2ad2004cb5a5985538ff82d4dda707a9aa9c0c35c745039e18e89b', 'status': 'active', 'metadata': {'metering.server_group': '06b33269-d1c6-4fb9-a44b-be304982a550'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.360 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.360 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.360 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.361 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.361 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-26T16:46:31.360898) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:46:31 compute-0 openstack_network_exporter[204387]: ERROR   16:46:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 16:46:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:46:31 compute-0 openstack_network_exporter[204387]: ERROR   16:46:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 16:46:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.474 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.475 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.476 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.612 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.613 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.614 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.744 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.write.bytes volume: 41852928 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.745 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.745 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.746 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.746 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f04ce8af080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.746 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.747 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.747 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.748 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.748 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.latency volume: 876270399 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.748 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.latency volume: 10769042 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.749 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.749 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.latency volume: 1221465504 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.750 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.latency volume: 9811607 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.750 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.750 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-26T16:46:31.747612) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.751 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.write.latency volume: 1489415647 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.751 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.write.latency volume: 10680310 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.752 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.753 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.753 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f04ce8af0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.753 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.753 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.754 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.754 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.754 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-26T16:46:31.754211) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.754 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.755 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.755 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.756 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.756 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.756 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.757 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.write.requests volume: 243 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.758 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.758 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.759 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.759 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f04ce8af470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.760 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.760 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.760 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.760 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.761 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-26T16:46:31.760911) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.767 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.774 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.incoming.bytes.delta volume: 1522 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.780 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.781 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.781 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f04ce8ac260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.781 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.781 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f04cfa81c70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.782 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.782 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.782 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.782 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.783 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-26T16:46:31.782353) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.817 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/cpu volume: 40350000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.860 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/cpu volume: 34610000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.902 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/cpu volume: 250830000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.903 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.903 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f04ce8ac170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.903 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.903 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.904 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.904 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.904 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.904 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.incoming.packets volume: 15 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.905 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/network.incoming.packets volume: 56 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.905 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.905 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f04ce8af140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.905 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.905 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.905 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.906 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.906 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.907 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f04ce8adb50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.907 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.907 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.909 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.909 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.910 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.910 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-26T16:46:31.904138) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.910 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.910 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.911 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.911 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f04ce8ad9a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.911 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.911 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-26T16:46:31.906103) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.912 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.912 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.912 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-26T16:46:31.909845) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.912 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.912 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.bytes volume: 2272 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.912 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.outgoing.bytes volume: 2258 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.913 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/network.outgoing.bytes volume: 7502 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.913 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.913 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f04ce8af1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.913 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.914 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.914 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.914 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.914 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.915 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f04ce8ad8e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.915 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.915 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.915 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.915 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.915 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.916 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.916 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/network.outgoing.packets volume: 65 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.917 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.917 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-26T16:46:31.912260) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.917 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f04ce8ada60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.917 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.917 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-26T16:46:31.914247) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.917 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.917 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.917 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-26T16:46:31.915541) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.918 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.918 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.918 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.outgoing.bytes.delta volume: 2258 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.918 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.919 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.919 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f04ce8adaf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.919 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.919 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f04ce8ad880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.920 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.920 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.920 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.920 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.920 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-26T16:46:31.917873) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.921 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.921 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.921 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.922 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.922 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f04ce8af3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.922 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.922 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.922 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.923 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.922 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-26T16:46:31.920805) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.923 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/memory.usage volume: 48.79296875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.923 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/memory.usage volume: 49.07421875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.923 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/memory.usage volume: 48.96484375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.924 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.924 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f04ce8af6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.924 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.924 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.925 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.925 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.925 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.926 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.926 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.926 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.926 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f04ce8af410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.927 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.927 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.927 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.926 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-26T16:46:31.923009) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.927 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.927 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.bytes volume: 2220 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.927 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.incoming.bytes volume: 1612 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.928 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/network.incoming.bytes volume: 8448 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.928 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.929 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f04ce8ac470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.929 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.929 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.929 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.927 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-26T16:46:31.925361) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.929 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.929 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.930 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.930 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.930 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.931 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f04d17142f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.931 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.931 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.931 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.931 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.932 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-26T16:46:31.927428) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.932 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-26T16:46:31.929506) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.933 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-26T16:46:31.931534) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.963 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.963 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.963 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.998 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.999 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:31.999 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.034 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.034 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.035 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.036 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.036 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f04ce8afe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.036 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.036 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.036 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.037 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.037 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-26T16:46:32.037003) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.037 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.038 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.038 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.038 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.039 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.039 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.039 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.039 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.040 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.040 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.041 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f04ce8aede0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.041 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.041 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.041 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.041 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.041 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.042 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.042 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.042 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.043 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.043 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.043 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.043 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.044 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.044 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.045 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-26T16:46:32.041490) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.045 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f04ce8aef00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.045 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.045 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.045 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.046 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.046 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.latency volume: 331117122 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.046 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.latency volume: 97972603 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.046 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.latency volume: 58692222 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.047 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.latency volume: 437272566 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.047 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.latency volume: 86953754 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.047 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.latency volume: 62824695 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.048 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.read.latency volume: 489623248 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.048 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.read.latency volume: 79957548 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.048 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.read.latency volume: 54491661 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.049 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.049 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f04ce8ac710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.049 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.049 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.049 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.049 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.050 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.050 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.050 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.051 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-26T16:46:32.046018) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.051 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-26T16:46:32.049847) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.051 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.051 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f04ce8aef60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.052 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.052 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.052 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.052 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.052 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-26T16:46:32.052386) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.052 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.053 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.053 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.053 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.054 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.054 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.054 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.055 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.055 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.055 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.056 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f04ce8aefc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.056 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.056 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.056 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.056 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.056 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.057 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.057 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.057 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.057 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.058 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.058 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.058 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.059 14 DEBUG ceilometer.compute.pollsters [-] 2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.059 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-26T16:46:32.056512) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.060 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.060 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.061 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.061 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.061 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.061 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.061 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.061 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.061 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.061 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.061 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.061 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.061 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.062 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.062 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.062 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.062 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.062 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.062 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.062 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.062 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.062 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.063 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.063 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.063 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.063 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:46:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:46:32.063 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:46:34 compute-0 podman[242342]: 2026-01-26 16:46:34.252249643 +0000 UTC m=+0.132054772 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 26 16:46:34 compute-0 nova_compute[185389]: 2026-01-26 16:46:34.630 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:46:36 compute-0 podman[242364]: 2026-01-26 16:46:36.201204577 +0000 UTC m=+0.088122116 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 26 16:46:36 compute-0 nova_compute[185389]: 2026-01-26 16:46:36.251 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:46:38 compute-0 nova_compute[185389]: 2026-01-26 16:46:38.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:46:38 compute-0 nova_compute[185389]: 2026-01-26 16:46:38.719 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:46:39 compute-0 podman[242386]: 2026-01-26 16:46:39.215185901 +0000 UTC m=+0.090403857 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, name=ubi9, release=1214.1726694543, com.redhat.component=ubi9-container, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., container_name=kepler, io.openshift.tags=base rhel9, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, managed_by=edpm_ansible, config_id=kepler, io.buildah.version=1.29.0, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Jan 26 16:46:39 compute-0 podman[242385]: 2026-01-26 16:46:39.233541339 +0000 UTC m=+0.108972900 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Jan 26 16:46:39 compute-0 podman[242384]: 2026-01-26 16:46:39.265442832 +0000 UTC m=+0.150907786 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_id=ovn_controller, 
container_name=ovn_controller, tcib_managed=true)
Jan 26 16:46:39 compute-0 nova_compute[185389]: 2026-01-26 16:46:39.632 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:46:40 compute-0 nova_compute[185389]: 2026-01-26 16:46:40.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:46:41 compute-0 nova_compute[185389]: 2026-01-26 16:46:41.253 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:46:42 compute-0 nova_compute[185389]: 2026-01-26 16:46:42.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:46:42 compute-0 nova_compute[185389]: 2026-01-26 16:46:42.721 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:46:44 compute-0 nova_compute[185389]: 2026-01-26 16:46:44.635 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:46:46 compute-0 nova_compute[185389]: 2026-01-26 16:46:46.256 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:46:48 compute-0 podman[242444]: 2026-01-26 16:46:48.252233579 +0000 UTC m=+0.099958289 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 16:46:49 compute-0 nova_compute[185389]: 2026-01-26 16:46:49.639 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:46:51 compute-0 nova_compute[185389]: 2026-01-26 16:46:51.258 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:46:54 compute-0 nova_compute[185389]: 2026-01-26 16:46:54.643 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:46:56 compute-0 podman[242470]: 2026-01-26 16:46:56.215866929 +0000 UTC m=+0.103261007 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, version=9.6, distribution-scope=public, release=1755695350, com.redhat.component=ubi9-minimal-container, architecture=x86_64, container_name=openstack_network_exporter, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., maintainer=Red Hat, Inc., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=openstack_network_exporter, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Jan 26 16:46:56 compute-0 nova_compute[185389]: 2026-01-26 16:46:56.261 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:46:59 compute-0 podman[242492]: 2026-01-26 16:46:59.267034956 +0000 UTC m=+0.133613290 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260120, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Jan 26 16:46:59 compute-0 nova_compute[185389]: 2026-01-26 16:46:59.646 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:46:59 compute-0 podman[201244]: time="2026-01-26T16:46:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 16:46:59 compute-0 podman[201244]: @ - - [26/Jan/2026:16:46:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 16:46:59 compute-0 podman[201244]: @ - - [26/Jan/2026:16:46:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4375 "" "Go-http-client/1.1"
Jan 26 16:47:01 compute-0 nova_compute[185389]: 2026-01-26 16:47:01.263 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:47:01 compute-0 openstack_network_exporter[204387]: ERROR   16:47:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 16:47:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:47:01 compute-0 openstack_network_exporter[204387]: ERROR   16:47:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 16:47:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:47:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:47:01.726 106955 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:47:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:47:01.728 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:47:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:47:01.729 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:47:04 compute-0 nova_compute[185389]: 2026-01-26 16:47:04.649 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:47:05 compute-0 podman[242509]: 2026-01-26 16:47:05.234759161 +0000 UTC m=+0.101555401 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 26 16:47:06 compute-0 nova_compute[185389]: 2026-01-26 16:47:06.266 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:47:07 compute-0 podman[242532]: 2026-01-26 16:47:07.239737052 +0000 UTC m=+0.113210016 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 26 16:47:07 compute-0 sshd-session[242530]: Invalid user ubnt from 176.120.22.13 port 47930
Jan 26 16:47:08 compute-0 sshd-session[242530]: Connection reset by invalid user ubnt 176.120.22.13 port 47930 [preauth]
Jan 26 16:47:09 compute-0 nova_compute[185389]: 2026-01-26 16:47:09.652 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:47:10 compute-0 podman[242555]: 2026-01-26 16:47:10.2375222 +0000 UTC m=+0.100947175 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:47:10 compute-0 podman[242561]: 2026-01-26 16:47:10.248280072 +0000 UTC m=+0.101656524 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=kepler, io.openshift.expose-services=, vcs-type=git, version=9.4, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., release-0.7.12=, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, release=1214.1726694543, io.openshift.tags=base rhel9, managed_by=edpm_ansible, name=ubi9, vendor=Red Hat, Inc., architecture=x86_64, build-date=2024-09-18T21:23:30)
Jan 26 16:47:10 compute-0 podman[242554]: 2026-01-26 16:47:10.248334063 +0000 UTC m=+0.135088859 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible)
Jan 26 16:47:10 compute-0 sshd-session[242552]: Connection reset by authenticating user root 176.120.22.13 port 47944 [preauth]
Jan 26 16:47:11 compute-0 nova_compute[185389]: 2026-01-26 16:47:11.270 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:47:12 compute-0 sshd-session[242616]: Invalid user admin from 176.120.22.13 port 47960
Jan 26 16:47:13 compute-0 sshd-session[242616]: Connection reset by invalid user admin 176.120.22.13 port 47960 [preauth]
Jan 26 16:47:14 compute-0 nova_compute[185389]: 2026-01-26 16:47:14.654 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:47:14 compute-0 sshd-session[242618]: Invalid user uucp from 176.120.22.13 port 24108
Jan 26 16:47:15 compute-0 sshd-session[242618]: Connection reset by invalid user uucp 176.120.22.13 port 24108 [preauth]
Jan 26 16:47:16 compute-0 nova_compute[185389]: 2026-01-26 16:47:16.273 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:47:18 compute-0 sshd-session[242620]: Connection reset by authenticating user root 176.120.22.13 port 24112 [preauth]
Jan 26 16:47:19 compute-0 podman[242622]: 2026-01-26 16:47:19.012581674 +0000 UTC m=+0.138025799 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 16:47:19 compute-0 nova_compute[185389]: 2026-01-26 16:47:19.658 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:47:21 compute-0 nova_compute[185389]: 2026-01-26 16:47:21.275 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:47:23 compute-0 nova_compute[185389]: 2026-01-26 16:47:23.598 185393 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 68.88 sec
Jan 26 16:47:24 compute-0 nova_compute[185389]: 2026-01-26 16:47:24.659 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:47:25 compute-0 nova_compute[185389]: 2026-01-26 16:47:25.694 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "refresh_cache-2ee04f75-dc75-489c-85b5-19cd6d573bf1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:47:25 compute-0 nova_compute[185389]: 2026-01-26 16:47:25.695 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquired lock "refresh_cache-2ee04f75-dc75-489c-85b5-19cd6d573bf1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:47:25 compute-0 nova_compute[185389]: 2026-01-26 16:47:25.695 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 26 16:47:26 compute-0 nova_compute[185389]: 2026-01-26 16:47:26.277 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:47:27 compute-0 podman[242646]: 2026-01-26 16:47:27.194207907 +0000 UTC m=+0.087886751 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, vendor=Red Hat, Inc., distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, name=ubi9-minimal, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, release=1755695350, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter)
Jan 26 16:47:29 compute-0 nova_compute[185389]: 2026-01-26 16:47:29.662 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:47:29 compute-0 podman[201244]: time="2026-01-26T16:47:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 16:47:29 compute-0 podman[201244]: @ - - [26/Jan/2026:16:47:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 16:47:29 compute-0 podman[201244]: @ - - [26/Jan/2026:16:47:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4376 "" "Go-http-client/1.1"
Jan 26 16:47:29 compute-0 nova_compute[185389]: 2026-01-26 16:47:29.842 185393 DEBUG oslo_concurrency.lockutils [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Acquiring lock "ad24fa25-1660-453a-ad2c-f873360adfae" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:47:29 compute-0 nova_compute[185389]: 2026-01-26 16:47:29.843 185393 DEBUG oslo_concurrency.lockutils [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "ad24fa25-1660-453a-ad2c-f873360adfae" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:47:29 compute-0 nova_compute[185389]: 2026-01-26 16:47:29.869 185393 DEBUG nova.compute.manager [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: ad24fa25-1660-453a-ad2c-f873360adfae] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:47:29 compute-0 nova_compute[185389]: 2026-01-26 16:47:29.979 185393 DEBUG oslo_concurrency.lockutils [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:47:29 compute-0 nova_compute[185389]: 2026-01-26 16:47:29.981 185393 DEBUG oslo_concurrency.lockutils [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:47:29 compute-0 nova_compute[185389]: 2026-01-26 16:47:29.994 185393 DEBUG nova.virt.hardware [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:47:29 compute-0 nova_compute[185389]: 2026-01-26 16:47:29.994 185393 INFO nova.compute.claims [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: ad24fa25-1660-453a-ad2c-f873360adfae] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:47:30 compute-0 podman[242667]: 2026-01-26 16:47:30.200738105 +0000 UTC m=+0.094968953 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260120, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Jan 26 16:47:30 compute-0 nova_compute[185389]: 2026-01-26 16:47:30.947 185393 DEBUG nova.compute.provider_tree [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:47:30 compute-0 nova_compute[185389]: 2026-01-26 16:47:30.972 185393 DEBUG nova.scheduler.client.report [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:47:31 compute-0 nova_compute[185389]: 2026-01-26 16:47:31.081 185393 DEBUG oslo_concurrency.lockutils [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.100s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:47:31 compute-0 nova_compute[185389]: 2026-01-26 16:47:31.083 185393 DEBUG nova.compute.manager [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: ad24fa25-1660-453a-ad2c-f873360adfae] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:47:31 compute-0 nova_compute[185389]: 2026-01-26 16:47:31.282 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:47:31 compute-0 nova_compute[185389]: 2026-01-26 16:47:31.285 185393 DEBUG nova.compute.manager [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: ad24fa25-1660-453a-ad2c-f873360adfae] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:47:31 compute-0 nova_compute[185389]: 2026-01-26 16:47:31.285 185393 DEBUG nova.network.neutron [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: ad24fa25-1660-453a-ad2c-f873360adfae] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:47:31 compute-0 nova_compute[185389]: 2026-01-26 16:47:31.402 185393 INFO nova.virt.libvirt.driver [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: ad24fa25-1660-453a-ad2c-f873360adfae] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:47:31 compute-0 openstack_network_exporter[204387]: ERROR   16:47:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 16:47:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:47:31 compute-0 openstack_network_exporter[204387]: ERROR   16:47:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 16:47:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:47:31 compute-0 nova_compute[185389]: 2026-01-26 16:47:31.454 185393 DEBUG nova.compute.manager [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: ad24fa25-1660-453a-ad2c-f873360adfae] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:47:32 compute-0 nova_compute[185389]: 2026-01-26 16:47:32.655 185393 DEBUG nova.compute.manager [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: ad24fa25-1660-453a-ad2c-f873360adfae] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:47:32 compute-0 nova_compute[185389]: 2026-01-26 16:47:32.658 185393 DEBUG nova.virt.libvirt.driver [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: ad24fa25-1660-453a-ad2c-f873360adfae] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:47:32 compute-0 nova_compute[185389]: 2026-01-26 16:47:32.659 185393 INFO nova.virt.libvirt.driver [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: ad24fa25-1660-453a-ad2c-f873360adfae] Creating image(s)
Jan 26 16:47:32 compute-0 nova_compute[185389]: 2026-01-26 16:47:32.659 185393 DEBUG oslo_concurrency.lockutils [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Acquiring lock "/var/lib/nova/instances/ad24fa25-1660-453a-ad2c-f873360adfae/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:47:32 compute-0 nova_compute[185389]: 2026-01-26 16:47:32.660 185393 DEBUG oslo_concurrency.lockutils [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "/var/lib/nova/instances/ad24fa25-1660-453a-ad2c-f873360adfae/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:47:32 compute-0 nova_compute[185389]: 2026-01-26 16:47:32.661 185393 DEBUG oslo_concurrency.lockutils [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "/var/lib/nova/instances/ad24fa25-1660-453a-ad2c-f873360adfae/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:47:32 compute-0 nova_compute[185389]: 2026-01-26 16:47:32.680 185393 DEBUG oslo_concurrency.processutils [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7a5e3188ac4de3f0ad8eeb8c9bbd6ccd05a86bb3 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:47:32 compute-0 nova_compute[185389]: 2026-01-26 16:47:32.757 185393 DEBUG oslo_concurrency.processutils [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7a5e3188ac4de3f0ad8eeb8c9bbd6ccd05a86bb3 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:47:32 compute-0 nova_compute[185389]: 2026-01-26 16:47:32.758 185393 DEBUG oslo_concurrency.lockutils [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Acquiring lock "7a5e3188ac4de3f0ad8eeb8c9bbd6ccd05a86bb3" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:47:32 compute-0 nova_compute[185389]: 2026-01-26 16:47:32.759 185393 DEBUG oslo_concurrency.lockutils [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "7a5e3188ac4de3f0ad8eeb8c9bbd6ccd05a86bb3" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:47:32 compute-0 nova_compute[185389]: 2026-01-26 16:47:32.771 185393 DEBUG oslo_concurrency.processutils [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7a5e3188ac4de3f0ad8eeb8c9bbd6ccd05a86bb3 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:47:32 compute-0 nova_compute[185389]: 2026-01-26 16:47:32.837 185393 DEBUG oslo_concurrency.processutils [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7a5e3188ac4de3f0ad8eeb8c9bbd6ccd05a86bb3 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:47:32 compute-0 nova_compute[185389]: 2026-01-26 16:47:32.838 185393 DEBUG oslo_concurrency.processutils [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/7a5e3188ac4de3f0ad8eeb8c9bbd6ccd05a86bb3,backing_fmt=raw /var/lib/nova/instances/ad24fa25-1660-453a-ad2c-f873360adfae/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:47:32 compute-0 nova_compute[185389]: 2026-01-26 16:47:32.888 185393 DEBUG oslo_concurrency.processutils [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/7a5e3188ac4de3f0ad8eeb8c9bbd6ccd05a86bb3,backing_fmt=raw /var/lib/nova/instances/ad24fa25-1660-453a-ad2c-f873360adfae/disk 1073741824" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:47:32 compute-0 nova_compute[185389]: 2026-01-26 16:47:32.889 185393 DEBUG oslo_concurrency.lockutils [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "7a5e3188ac4de3f0ad8eeb8c9bbd6ccd05a86bb3" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.131s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:47:32 compute-0 nova_compute[185389]: 2026-01-26 16:47:32.890 185393 DEBUG oslo_concurrency.processutils [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7a5e3188ac4de3f0ad8eeb8c9bbd6ccd05a86bb3 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:47:32 compute-0 nova_compute[185389]: 2026-01-26 16:47:32.957 185393 DEBUG oslo_concurrency.processutils [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7a5e3188ac4de3f0ad8eeb8c9bbd6ccd05a86bb3 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:47:32 compute-0 nova_compute[185389]: 2026-01-26 16:47:32.958 185393 DEBUG nova.virt.disk.api [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Checking if we can resize image /var/lib/nova/instances/ad24fa25-1660-453a-ad2c-f873360adfae/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 26 16:47:32 compute-0 nova_compute[185389]: 2026-01-26 16:47:32.958 185393 DEBUG oslo_concurrency.processutils [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ad24fa25-1660-453a-ad2c-f873360adfae/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:47:33 compute-0 nova_compute[185389]: 2026-01-26 16:47:33.036 185393 DEBUG oslo_concurrency.processutils [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ad24fa25-1660-453a-ad2c-f873360adfae/disk --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:47:33 compute-0 nova_compute[185389]: 2026-01-26 16:47:33.037 185393 DEBUG nova.virt.disk.api [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Cannot resize image /var/lib/nova/instances/ad24fa25-1660-453a-ad2c-f873360adfae/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 26 16:47:33 compute-0 nova_compute[185389]: 2026-01-26 16:47:33.038 185393 DEBUG nova.objects.instance [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lazy-loading 'migration_context' on Instance uuid ad24fa25-1660-453a-ad2c-f873360adfae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:47:33 compute-0 nova_compute[185389]: 2026-01-26 16:47:33.088 185393 DEBUG oslo_concurrency.lockutils [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Acquiring lock "/var/lib/nova/instances/ad24fa25-1660-453a-ad2c-f873360adfae/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:47:33 compute-0 nova_compute[185389]: 2026-01-26 16:47:33.089 185393 DEBUG oslo_concurrency.lockutils [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "/var/lib/nova/instances/ad24fa25-1660-453a-ad2c-f873360adfae/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:47:33 compute-0 nova_compute[185389]: 2026-01-26 16:47:33.090 185393 DEBUG oslo_concurrency.lockutils [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "/var/lib/nova/instances/ad24fa25-1660-453a-ad2c-f873360adfae/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:47:33 compute-0 nova_compute[185389]: 2026-01-26 16:47:33.104 185393 DEBUG oslo_concurrency.processutils [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:47:33 compute-0 nova_compute[185389]: 2026-01-26 16:47:33.170 185393 DEBUG oslo_concurrency.processutils [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:47:33 compute-0 nova_compute[185389]: 2026-01-26 16:47:33.171 185393 DEBUG oslo_concurrency.lockutils [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:47:33 compute-0 nova_compute[185389]: 2026-01-26 16:47:33.172 185393 DEBUG oslo_concurrency.lockutils [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:47:33 compute-0 nova_compute[185389]: 2026-01-26 16:47:33.187 185393 DEBUG oslo_concurrency.processutils [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:47:33 compute-0 nova_compute[185389]: 2026-01-26 16:47:33.248 185393 DEBUG oslo_concurrency.processutils [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:47:33 compute-0 nova_compute[185389]: 2026-01-26 16:47:33.249 185393 DEBUG oslo_concurrency.processutils [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/ad24fa25-1660-453a-ad2c-f873360adfae/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:47:33 compute-0 nova_compute[185389]: 2026-01-26 16:47:33.291 185393 DEBUG oslo_concurrency.processutils [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/ad24fa25-1660-453a-ad2c-f873360adfae/disk.eph0 1073741824" returned: 0 in 0.042s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:47:33 compute-0 nova_compute[185389]: 2026-01-26 16:47:33.293 185393 DEBUG oslo_concurrency.lockutils [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.121s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:47:33 compute-0 nova_compute[185389]: 2026-01-26 16:47:33.293 185393 DEBUG oslo_concurrency.processutils [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:47:33 compute-0 nova_compute[185389]: 2026-01-26 16:47:33.368 185393 DEBUG oslo_concurrency.processutils [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:47:33 compute-0 nova_compute[185389]: 2026-01-26 16:47:33.369 185393 DEBUG nova.virt.libvirt.driver [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: ad24fa25-1660-453a-ad2c-f873360adfae] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:47:33 compute-0 nova_compute[185389]: 2026-01-26 16:47:33.370 185393 DEBUG nova.virt.libvirt.driver [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: ad24fa25-1660-453a-ad2c-f873360adfae] Ensure instance console log exists: /var/lib/nova/instances/ad24fa25-1660-453a-ad2c-f873360adfae/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:47:33 compute-0 nova_compute[185389]: 2026-01-26 16:47:33.372 185393 DEBUG oslo_concurrency.lockutils [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:47:33 compute-0 nova_compute[185389]: 2026-01-26 16:47:33.373 185393 DEBUG oslo_concurrency.lockutils [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:47:33 compute-0 nova_compute[185389]: 2026-01-26 16:47:33.373 185393 DEBUG oslo_concurrency.lockutils [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:47:34 compute-0 nova_compute[185389]: 2026-01-26 16:47:34.513 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] Updating instance_info_cache with network_info: [{"id": "5e252863-184d-4e1e-a33d-6e280cd72b51", "address": "fa:16:3e:65:38:01", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.173", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e252863-18", "ovs_interfaceid": "5e252863-184d-4e1e-a33d-6e280cd72b51", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:47:34 compute-0 nova_compute[185389]: 2026-01-26 16:47:34.531 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Releasing lock "refresh_cache-2ee04f75-dc75-489c-85b5-19cd6d573bf1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:47:34 compute-0 nova_compute[185389]: 2026-01-26 16:47:34.532 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 26 16:47:34 compute-0 nova_compute[185389]: 2026-01-26 16:47:34.533 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:47:34 compute-0 nova_compute[185389]: 2026-01-26 16:47:34.533 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:47:34 compute-0 nova_compute[185389]: 2026-01-26 16:47:34.533 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:47:34 compute-0 nova_compute[185389]: 2026-01-26 16:47:34.533 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:47:34 compute-0 nova_compute[185389]: 2026-01-26 16:47:34.533 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:47:34 compute-0 nova_compute[185389]: 2026-01-26 16:47:34.665 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:47:34 compute-0 nova_compute[185389]: 2026-01-26 16:47:34.676 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:47:34 compute-0 nova_compute[185389]: 2026-01-26 16:47:34.677 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:47:34 compute-0 nova_compute[185389]: 2026-01-26 16:47:34.677 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:47:34 compute-0 nova_compute[185389]: 2026-01-26 16:47:34.677 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:47:34 compute-0 nova_compute[185389]: 2026-01-26 16:47:34.836 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:47:34 compute-0 nova_compute[185389]: 2026-01-26 16:47:34.898 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:47:34 compute-0 nova_compute[185389]: 2026-01-26 16:47:34.899 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:47:34 compute-0 nova_compute[185389]: 2026-01-26 16:47:34.965 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:47:34 compute-0 nova_compute[185389]: 2026-01-26 16:47:34.966 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:47:35 compute-0 nova_compute[185389]: 2026-01-26 16:47:35.028 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:47:35 compute-0 nova_compute[185389]: 2026-01-26 16:47:35.029 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:47:35 compute-0 nova_compute[185389]: 2026-01-26 16:47:35.107 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:47:35 compute-0 nova_compute[185389]: 2026-01-26 16:47:35.118 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:47:35 compute-0 nova_compute[185389]: 2026-01-26 16:47:35.181 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:47:35 compute-0 nova_compute[185389]: 2026-01-26 16:47:35.182 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:47:35 compute-0 nova_compute[185389]: 2026-01-26 16:47:35.258 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:47:35 compute-0 nova_compute[185389]: 2026-01-26 16:47:35.259 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:47:35 compute-0 nova_compute[185389]: 2026-01-26 16:47:35.335 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:47:35 compute-0 nova_compute[185389]: 2026-01-26 16:47:35.336 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:47:35 compute-0 nova_compute[185389]: 2026-01-26 16:47:35.407 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:47:35 compute-0 nova_compute[185389]: 2026-01-26 16:47:35.416 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:47:35 compute-0 nova_compute[185389]: 2026-01-26 16:47:35.486 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:47:35 compute-0 nova_compute[185389]: 2026-01-26 16:47:35.489 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:47:35 compute-0 nova_compute[185389]: 2026-01-26 16:47:35.557 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:47:35 compute-0 nova_compute[185389]: 2026-01-26 16:47:35.559 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:47:35 compute-0 nova_compute[185389]: 2026-01-26 16:47:35.630 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.eph0 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:47:35 compute-0 nova_compute[185389]: 2026-01-26 16:47:35.632 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:47:35 compute-0 nova_compute[185389]: 2026-01-26 16:47:35.703 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1/disk.eph0 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:47:36 compute-0 nova_compute[185389]: 2026-01-26 16:47:36.123 185393 WARNING nova.virt.libvirt.driver [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:47:36 compute-0 nova_compute[185389]: 2026-01-26 16:47:36.124 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4781MB free_disk=72.37850189208984GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:47:36 compute-0 nova_compute[185389]: 2026-01-26 16:47:36.125 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:47:36 compute-0 nova_compute[185389]: 2026-01-26 16:47:36.126 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:47:36 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:47:36.158 106955 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '0e:35:12', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'de:09:3c:73:7d:ab'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:47:36 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:47:36.159 106955 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 26 16:47:36 compute-0 nova_compute[185389]: 2026-01-26 16:47:36.165 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:47:36 compute-0 podman[242750]: 2026-01-26 16:47:36.191878223 +0000 UTC m=+0.078024785 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 26 16:47:36 compute-0 nova_compute[185389]: 2026-01-26 16:47:36.223 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance 60ba224f-9c5d-4eb4-b501-66d7339832b9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:47:36 compute-0 nova_compute[185389]: 2026-01-26 16:47:36.223 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance 2ee04f75-dc75-489c-85b5-19cd6d573bf1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:47:36 compute-0 nova_compute[185389]: 2026-01-26 16:47:36.224 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance a2578f61-3f19-40f4-a32f-97cf22569550 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:47:36 compute-0 nova_compute[185389]: 2026-01-26 16:47:36.224 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance ad24fa25-1660-453a-ad2c-f873360adfae actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:47:36 compute-0 nova_compute[185389]: 2026-01-26 16:47:36.224 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:47:36 compute-0 nova_compute[185389]: 2026-01-26 16:47:36.224 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:47:36 compute-0 nova_compute[185389]: 2026-01-26 16:47:36.287 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:47:36 compute-0 nova_compute[185389]: 2026-01-26 16:47:36.358 185393 DEBUG nova.compute.provider_tree [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:47:36 compute-0 nova_compute[185389]: 2026-01-26 16:47:36.384 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:47:36 compute-0 nova_compute[185389]: 2026-01-26 16:47:36.417 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:47:36 compute-0 nova_compute[185389]: 2026-01-26 16:47:36.418 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.292s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:47:36 compute-0 nova_compute[185389]: 2026-01-26 16:47:36.446 185393 DEBUG nova.network.neutron [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: ad24fa25-1660-453a-ad2c-f873360adfae] Successfully updated port: 97c831fb-1c5a-4bb0-a6ac-9e8b5bf3600d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:47:36 compute-0 nova_compute[185389]: 2026-01-26 16:47:36.479 185393 DEBUG oslo_concurrency.lockutils [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Acquiring lock "refresh_cache-ad24fa25-1660-453a-ad2c-f873360adfae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:47:36 compute-0 nova_compute[185389]: 2026-01-26 16:47:36.479 185393 DEBUG oslo_concurrency.lockutils [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Acquired lock "refresh_cache-ad24fa25-1660-453a-ad2c-f873360adfae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:47:36 compute-0 nova_compute[185389]: 2026-01-26 16:47:36.480 185393 DEBUG nova.network.neutron [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: ad24fa25-1660-453a-ad2c-f873360adfae] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:47:36 compute-0 nova_compute[185389]: 2026-01-26 16:47:36.680 185393 DEBUG nova.compute.manager [req-4ff30d35-1115-4870-acf1-ca48ed3554cf req-5414ed47-950c-4993-a18a-53acd9263eba 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: ad24fa25-1660-453a-ad2c-f873360adfae] Received event network-changed-97c831fb-1c5a-4bb0-a6ac-9e8b5bf3600d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:47:36 compute-0 nova_compute[185389]: 2026-01-26 16:47:36.680 185393 DEBUG nova.compute.manager [req-4ff30d35-1115-4870-acf1-ca48ed3554cf req-5414ed47-950c-4993-a18a-53acd9263eba 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: ad24fa25-1660-453a-ad2c-f873360adfae] Refreshing instance network info cache due to event network-changed-97c831fb-1c5a-4bb0-a6ac-9e8b5bf3600d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:47:36 compute-0 nova_compute[185389]: 2026-01-26 16:47:36.682 185393 DEBUG oslo_concurrency.lockutils [req-4ff30d35-1115-4870-acf1-ca48ed3554cf req-5414ed47-950c-4993-a18a-53acd9263eba 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquiring lock "refresh_cache-ad24fa25-1660-453a-ad2c-f873360adfae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:47:36 compute-0 nova_compute[185389]: 2026-01-26 16:47:36.827 185393 DEBUG nova.network.neutron [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: ad24fa25-1660-453a-ad2c-f873360adfae] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:47:37 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:47:37.161 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1c72c11d-5050-47c3-89e8-912766588fb3, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:47:37 compute-0 nova_compute[185389]: 2026-01-26 16:47:37.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:47:37 compute-0 nova_compute[185389]: 2026-01-26 16:47:37.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:47:37 compute-0 nova_compute[185389]: 2026-01-26 16:47:37.720 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 26 16:47:37 compute-0 nova_compute[185389]: 2026-01-26 16:47:37.968 185393 DEBUG oslo_concurrency.lockutils [None req-7c0a1ba7-1466-484c-89ae-4e3114c7dd40 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Acquiring lock "2ee04f75-dc75-489c-85b5-19cd6d573bf1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:47:37 compute-0 nova_compute[185389]: 2026-01-26 16:47:37.969 185393 DEBUG oslo_concurrency.lockutils [None req-7c0a1ba7-1466-484c-89ae-4e3114c7dd40 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "2ee04f75-dc75-489c-85b5-19cd6d573bf1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:47:37 compute-0 nova_compute[185389]: 2026-01-26 16:47:37.969 185393 DEBUG oslo_concurrency.lockutils [None req-7c0a1ba7-1466-484c-89ae-4e3114c7dd40 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Acquiring lock "2ee04f75-dc75-489c-85b5-19cd6d573bf1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:47:37 compute-0 nova_compute[185389]: 2026-01-26 16:47:37.969 185393 DEBUG oslo_concurrency.lockutils [None req-7c0a1ba7-1466-484c-89ae-4e3114c7dd40 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "2ee04f75-dc75-489c-85b5-19cd6d573bf1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:47:37 compute-0 nova_compute[185389]: 2026-01-26 16:47:37.969 185393 DEBUG oslo_concurrency.lockutils [None req-7c0a1ba7-1466-484c-89ae-4e3114c7dd40 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "2ee04f75-dc75-489c-85b5-19cd6d573bf1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:47:37 compute-0 nova_compute[185389]: 2026-01-26 16:47:37.971 185393 INFO nova.compute.manager [None req-7c0a1ba7-1466-484c-89ae-4e3114c7dd40 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] Terminating instance
Jan 26 16:47:37 compute-0 nova_compute[185389]: 2026-01-26 16:47:37.972 185393 DEBUG nova.compute.manager [None req-7c0a1ba7-1466-484c-89ae-4e3114c7dd40 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:47:38 compute-0 kernel: tap5e252863-18 (unregistering): left promiscuous mode
Jan 26 16:47:38 compute-0 NetworkManager[56253]: <info>  [1769446058.0186] device (tap5e252863-18): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:47:38 compute-0 nova_compute[185389]: 2026-01-26 16:47:38.031 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:47:38 compute-0 ovn_controller[97699]: 2026-01-26T16:47:38Z|00053|binding|INFO|Releasing lport 5e252863-184d-4e1e-a33d-6e280cd72b51 from this chassis (sb_readonly=0)
Jan 26 16:47:38 compute-0 ovn_controller[97699]: 2026-01-26T16:47:38Z|00054|binding|INFO|Setting lport 5e252863-184d-4e1e-a33d-6e280cd72b51 down in Southbound
Jan 26 16:47:38 compute-0 ovn_controller[97699]: 2026-01-26T16:47:38Z|00055|binding|INFO|Removing iface tap5e252863-18 ovn-installed in OVS
Jan 26 16:47:38 compute-0 nova_compute[185389]: 2026-01-26 16:47:38.035 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:47:38 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:47:38.041 106955 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:38:01 192.168.0.173'], port_security=['fa:16:3e:65:38:01 192.168.0.173'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-2qbervo2qfhx-yjgyfumqtzbt-dg43u2kjlm35-port-7427fbcuf3nf', 'neutron:cidrs': '192.168.0.173/24', 'neutron:device_id': '2ee04f75-dc75-489c-85b5-19cd6d573bf1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74318d1e-b1d8-47d5-8ac3-218d758610fe', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-2qbervo2qfhx-yjgyfumqtzbt-dg43u2kjlm35-port-7427fbcuf3nf', 'neutron:project_id': 'aa8f1f3bbce34237a208c8e92ca9286f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c6ae7745-53c4-4846-bf8b-0c9f0303bef3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.200', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1197b65b-eda5-4824-97ab-519748b0b6a7, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7faee18cfdf0>], logical_port=5e252863-184d-4e1e-a33d-6e280cd72b51) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7faee18cfdf0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:47:38 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:47:38.042 106955 INFO neutron.agent.ovn.metadata.agent [-] Port 5e252863-184d-4e1e-a33d-6e280cd72b51 in datapath 74318d1e-b1d8-47d5-8ac3-218d758610fe unbound from our chassis
Jan 26 16:47:38 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:47:38.044 106955 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 74318d1e-b1d8-47d5-8ac3-218d758610fe
Jan 26 16:47:38 compute-0 nova_compute[185389]: 2026-01-26 16:47:38.054 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:47:38 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:47:38.072 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[c21dace4-8be3-42ce-9849-2670915bda17]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:47:38 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Deactivated successfully.
Jan 26 16:47:38 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Consumed 5min 15.498s CPU time.
Jan 26 16:47:38 compute-0 systemd-machined[156679]: Machine qemu-2-instance-00000002 terminated.
Jan 26 16:47:38 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:47:38.115 238787 DEBUG oslo.privsep.daemon [-] privsep: reply[f2e12100-f08a-4bd9-8c9a-00f1df7ece6f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:47:38 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:47:38.120 238787 DEBUG oslo.privsep.daemon [-] privsep: reply[eb98a6a0-90ee-41a0-a30f-301142f8b6b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:47:38 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:47:38.159 238787 DEBUG oslo.privsep.daemon [-] privsep: reply[fa9e9627-3cff-47eb-8a30-bf6c84415506]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:47:38 compute-0 podman[242774]: 2026-01-26 16:47:38.16383992 +0000 UTC m=+0.109486956 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 26 16:47:38 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:47:38.178 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[52414c61-5582-46b4-af20-495e4023f575]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap74318d1e-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b2:6c:31'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 13, 'rx_bytes': 658, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 13, 'rx_bytes': 658, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 410415, 'reachable_time': 40653, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 242803, 'error': None, 'target': 'ovnmeta-74318d1e-b1d8-47d5-8ac3-218d758610fe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:47:38 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:47:38.199 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[68a3d90d-42f2-40f3-879d-db57ed1d14d6]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap74318d1e-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 410434, 'tstamp': 410434}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 242804, 'error': None, 'target': 'ovnmeta-74318d1e-b1d8-47d5-8ac3-218d758610fe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap74318d1e-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 410439, 'tstamp': 410439}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 242804, 'error': None, 'target': 'ovnmeta-74318d1e-b1d8-47d5-8ac3-218d758610fe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:47:38 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:47:38.203 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap74318d1e-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:47:38 compute-0 nova_compute[185389]: 2026-01-26 16:47:38.205 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:47:38 compute-0 nova_compute[185389]: 2026-01-26 16:47:38.217 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:47:38 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:47:38.218 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap74318d1e-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:47:38 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:47:38.218 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:47:38 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:47:38.218 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap74318d1e-b0, col_values=(('external_ids', {'iface-id': '6045fbea-609e-4588-93b4-ca6dda4224d1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:47:38 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:47:38.219 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:47:38 compute-0 nova_compute[185389]: 2026-01-26 16:47:38.271 185393 INFO nova.virt.libvirt.driver [-] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] Instance destroyed successfully.
Jan 26 16:47:38 compute-0 nova_compute[185389]: 2026-01-26 16:47:38.271 185393 DEBUG nova.objects.instance [None req-7c0a1ba7-1466-484c-89ae-4e3114c7dd40 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lazy-loading 'resources' on Instance uuid 2ee04f75-dc75-489c-85b5-19cd6d573bf1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:47:38 compute-0 nova_compute[185389]: 2026-01-26 16:47:38.880 185393 DEBUG nova.virt.libvirt.vif [None req-7c0a1ba7-1466-484c-89ae-4e3114c7dd40 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:38:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-vo2qfhx-yjgyfumqtzbt-dg43u2kjlm35-vnf-tep2eibpzhxe',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-vo2qfhx-yjgyfumqtzbt-dg43u2kjlm35-vnf-tep2eibpzhxe',id=2,image_ref='718285d9-0264-40f4-9fb3-d2faff180284',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:38:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='06b33269-d1c6-4fb9-a44b-be304982a550'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='aa8f1f3bbce34237a208c8e92ca9286f',ramdisk_id='',reservation_id='r-tcf070kr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='718285d9-0264-40f4-9fb3-d2faff180284',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:38:55Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0zMjQwNDE0NjI5MzgxODc5NjQxPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTMyNDA0MTQ2MjkzODE4Nzk2NDE9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MzI0MDQxNDYyOTM4MTg3OTY0MT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTMyNDA0MTQ2MjkzODE4Nzk2NDE9PQpDb250ZW50LVR5cGU6IHRleHQvcGFyd
C1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgI
CAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0zMjQwNDE0NjI5MzgxODc5NjQxPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0zMjQwNDE0NjI5MzgxODc5NjQxPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5ja
G1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvK
Jan 26 16:47:38 compute-0 nova_compute[185389]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MzI0M
DQxNDYyOTM4MTg3OTY0MT09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTMyNDA0MTQ2MjkzODE4Nzk2NDE9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0zMjQwNDE0NjI5MzgxODc5NjQxPT0tLQo=',user_id='3c0ab9326d69400aa6a4a91432885d7f',uuid=2ee04f75-dc75-489c-85b5-19cd6d573bf1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5e252863-184d-4e1e-a33d-6e280cd72b51", "address": "fa:16:3e:65:38:01", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.173", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e252863-18", "ovs_interfaceid": "5e252863-184d-4e1e-a33d-6e280cd72b51", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, 
"preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:47:38 compute-0 nova_compute[185389]: 2026-01-26 16:47:38.881 185393 DEBUG nova.network.os_vif_util [None req-7c0a1ba7-1466-484c-89ae-4e3114c7dd40 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Converting VIF {"id": "5e252863-184d-4e1e-a33d-6e280cd72b51", "address": "fa:16:3e:65:38:01", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.173", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e252863-18", "ovs_interfaceid": "5e252863-184d-4e1e-a33d-6e280cd72b51", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:47:38 compute-0 nova_compute[185389]: 2026-01-26 16:47:38.882 185393 DEBUG nova.network.os_vif_util [None req-7c0a1ba7-1466-484c-89ae-4e3114c7dd40 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:65:38:01,bridge_name='br-int',has_traffic_filtering=True,id=5e252863-184d-4e1e-a33d-6e280cd72b51,network=Network(74318d1e-b1d8-47d5-8ac3-218d758610fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap5e252863-18') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:47:38 compute-0 nova_compute[185389]: 2026-01-26 16:47:38.882 185393 DEBUG os_vif [None req-7c0a1ba7-1466-484c-89ae-4e3114c7dd40 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:65:38:01,bridge_name='br-int',has_traffic_filtering=True,id=5e252863-184d-4e1e-a33d-6e280cd72b51,network=Network(74318d1e-b1d8-47d5-8ac3-218d758610fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap5e252863-18') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:47:38 compute-0 nova_compute[185389]: 2026-01-26 16:47:38.884 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:47:38 compute-0 nova_compute[185389]: 2026-01-26 16:47:38.884 185393 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5e252863-18, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:47:38 compute-0 nova_compute[185389]: 2026-01-26 16:47:38.886 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:47:38 compute-0 nova_compute[185389]: 2026-01-26 16:47:38.888 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:47:38 compute-0 nova_compute[185389]: 2026-01-26 16:47:38.889 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:47:38 compute-0 nova_compute[185389]: 2026-01-26 16:47:38.893 185393 INFO os_vif [None req-7c0a1ba7-1466-484c-89ae-4e3114c7dd40 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:65:38:01,bridge_name='br-int',has_traffic_filtering=True,id=5e252863-184d-4e1e-a33d-6e280cd72b51,network=Network(74318d1e-b1d8-47d5-8ac3-218d758610fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap5e252863-18')
Jan 26 16:47:38 compute-0 nova_compute[185389]: 2026-01-26 16:47:38.894 185393 INFO nova.virt.libvirt.driver [None req-7c0a1ba7-1466-484c-89ae-4e3114c7dd40 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] Deleting instance files /var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1_del
Jan 26 16:47:38 compute-0 nova_compute[185389]: 2026-01-26 16:47:38.895 185393 INFO nova.virt.libvirt.driver [None req-7c0a1ba7-1466-484c-89ae-4e3114c7dd40 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] Deletion of /var/lib/nova/instances/2ee04f75-dc75-489c-85b5-19cd6d573bf1_del complete
Jan 26 16:47:39 compute-0 nova_compute[185389]: 2026-01-26 16:47:39.047 185393 DEBUG nova.compute.manager [req-62ab04b3-6f66-41b4-a015-3f73cba9dd54 req-2d6d1156-5e10-40b3-91cb-7bf142f589a7 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: ad24fa25-1660-453a-ad2c-f873360adfae] Received event network-changed-97c831fb-1c5a-4bb0-a6ac-9e8b5bf3600d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:47:39 compute-0 nova_compute[185389]: 2026-01-26 16:47:39.047 185393 DEBUG nova.compute.manager [req-62ab04b3-6f66-41b4-a015-3f73cba9dd54 req-2d6d1156-5e10-40b3-91cb-7bf142f589a7 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: ad24fa25-1660-453a-ad2c-f873360adfae] Refreshing instance network info cache due to event network-changed-97c831fb-1c5a-4bb0-a6ac-9e8b5bf3600d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:47:39 compute-0 nova_compute[185389]: 2026-01-26 16:47:39.048 185393 DEBUG oslo_concurrency.lockutils [req-62ab04b3-6f66-41b4-a015-3f73cba9dd54 req-2d6d1156-5e10-40b3-91cb-7bf142f589a7 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquiring lock "refresh_cache-ad24fa25-1660-453a-ad2c-f873360adfae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:47:39 compute-0 nova_compute[185389]: 2026-01-26 16:47:39.118 185393 INFO nova.compute.manager [None req-7c0a1ba7-1466-484c-89ae-4e3114c7dd40 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] Took 1.15 seconds to destroy the instance on the hypervisor.
Jan 26 16:47:39 compute-0 nova_compute[185389]: 2026-01-26 16:47:39.119 185393 DEBUG oslo.service.loopingcall [None req-7c0a1ba7-1466-484c-89ae-4e3114c7dd40 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:47:39 compute-0 nova_compute[185389]: 2026-01-26 16:47:39.119 185393 DEBUG nova.compute.manager [-] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:47:39 compute-0 nova_compute[185389]: 2026-01-26 16:47:39.120 185393 DEBUG nova.network.neutron [-] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:47:39 compute-0 rsyslogd[235842]: message too long (8192) with configured size 8096, begin of message is: 2026-01-26 16:47:38.880 185393 DEBUG nova.virt.libvirt.vif [None req-7c0a1ba7-14 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 26 16:47:39 compute-0 nova_compute[185389]: 2026-01-26 16:47:39.658 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:47:39 compute-0 nova_compute[185389]: 2026-01-26 16:47:39.667 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:47:39 compute-0 nova_compute[185389]: 2026-01-26 16:47:39.706 185393 WARNING nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] While synchronizing instance power states, found 4 instances in the database and 2 instances on the hypervisor.
Jan 26 16:47:39 compute-0 nova_compute[185389]: 2026-01-26 16:47:39.707 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Triggering sync for uuid 60ba224f-9c5d-4eb4-b501-66d7339832b9 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 26 16:47:39 compute-0 nova_compute[185389]: 2026-01-26 16:47:39.708 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Triggering sync for uuid 2ee04f75-dc75-489c-85b5-19cd6d573bf1 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 26 16:47:39 compute-0 nova_compute[185389]: 2026-01-26 16:47:39.708 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Triggering sync for uuid a2578f61-3f19-40f4-a32f-97cf22569550 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 26 16:47:39 compute-0 nova_compute[185389]: 2026-01-26 16:47:39.709 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Triggering sync for uuid ad24fa25-1660-453a-ad2c-f873360adfae _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 26 16:47:39 compute-0 nova_compute[185389]: 2026-01-26 16:47:39.710 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "60ba224f-9c5d-4eb4-b501-66d7339832b9" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:47:39 compute-0 nova_compute[185389]: 2026-01-26 16:47:39.710 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "60ba224f-9c5d-4eb4-b501-66d7339832b9" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:47:39 compute-0 nova_compute[185389]: 2026-01-26 16:47:39.711 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "2ee04f75-dc75-489c-85b5-19cd6d573bf1" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:47:39 compute-0 nova_compute[185389]: 2026-01-26 16:47:39.712 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "a2578f61-3f19-40f4-a32f-97cf22569550" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:47:39 compute-0 nova_compute[185389]: 2026-01-26 16:47:39.713 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "a2578f61-3f19-40f4-a32f-97cf22569550" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:47:39 compute-0 nova_compute[185389]: 2026-01-26 16:47:39.714 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "ad24fa25-1660-453a-ad2c-f873360adfae" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:47:39 compute-0 nova_compute[185389]: 2026-01-26 16:47:39.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:47:39 compute-0 nova_compute[185389]: 2026-01-26 16:47:39.720 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:47:39 compute-0 nova_compute[185389]: 2026-01-26 16:47:39.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:47:39 compute-0 nova_compute[185389]: 2026-01-26 16:47:39.721 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 26 16:47:39 compute-0 nova_compute[185389]: 2026-01-26 16:47:39.796 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 26 16:47:39 compute-0 nova_compute[185389]: 2026-01-26 16:47:39.806 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "60ba224f-9c5d-4eb4-b501-66d7339832b9" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.096s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:47:39 compute-0 nova_compute[185389]: 2026-01-26 16:47:39.808 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "a2578f61-3f19-40f4-a32f-97cf22569550" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.095s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:47:40 compute-0 nova_compute[185389]: 2026-01-26 16:47:40.239 185393 DEBUG nova.compute.manager [req-d0a4a141-d8a8-4777-b88b-8c2015a190ef req-1135322b-1d9f-4cae-9b80-0a3b605135f8 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] Received event network-changed-5e252863-184d-4e1e-a33d-6e280cd72b51 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:47:40 compute-0 nova_compute[185389]: 2026-01-26 16:47:40.239 185393 DEBUG nova.compute.manager [req-d0a4a141-d8a8-4777-b88b-8c2015a190ef req-1135322b-1d9f-4cae-9b80-0a3b605135f8 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] Refreshing instance network info cache due to event network-changed-5e252863-184d-4e1e-a33d-6e280cd72b51. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:47:40 compute-0 nova_compute[185389]: 2026-01-26 16:47:40.240 185393 DEBUG oslo_concurrency.lockutils [req-d0a4a141-d8a8-4777-b88b-8c2015a190ef req-1135322b-1d9f-4cae-9b80-0a3b605135f8 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquiring lock "refresh_cache-2ee04f75-dc75-489c-85b5-19cd6d573bf1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:47:40 compute-0 nova_compute[185389]: 2026-01-26 16:47:40.240 185393 DEBUG oslo_concurrency.lockutils [req-d0a4a141-d8a8-4777-b88b-8c2015a190ef req-1135322b-1d9f-4cae-9b80-0a3b605135f8 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquired lock "refresh_cache-2ee04f75-dc75-489c-85b5-19cd6d573bf1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:47:40 compute-0 nova_compute[185389]: 2026-01-26 16:47:40.240 185393 DEBUG nova.network.neutron [req-d0a4a141-d8a8-4777-b88b-8c2015a190ef req-1135322b-1d9f-4cae-9b80-0a3b605135f8 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] Refreshing network info cache for port 5e252863-184d-4e1e-a33d-6e280cd72b51 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:47:40 compute-0 nova_compute[185389]: 2026-01-26 16:47:40.884 185393 DEBUG nova.network.neutron [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: ad24fa25-1660-453a-ad2c-f873360adfae] Updating instance_info_cache with network_info: [{"id": "97c831fb-1c5a-4bb0-a6ac-9e8b5bf3600d", "address": "fa:16:3e:a4:27:56", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.124", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97c831fb-1c", "ovs_interfaceid": "97c831fb-1c5a-4bb0-a6ac-9e8b5bf3600d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:47:40 compute-0 nova_compute[185389]: 2026-01-26 16:47:40.917 185393 DEBUG oslo_concurrency.lockutils [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Releasing lock "refresh_cache-ad24fa25-1660-453a-ad2c-f873360adfae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:47:40 compute-0 nova_compute[185389]: 2026-01-26 16:47:40.917 185393 DEBUG nova.compute.manager [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: ad24fa25-1660-453a-ad2c-f873360adfae] Instance network_info: |[{"id": "97c831fb-1c5a-4bb0-a6ac-9e8b5bf3600d", "address": "fa:16:3e:a4:27:56", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.124", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97c831fb-1c", "ovs_interfaceid": "97c831fb-1c5a-4bb0-a6ac-9e8b5bf3600d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:47:40 compute-0 nova_compute[185389]: 2026-01-26 16:47:40.918 185393 DEBUG oslo_concurrency.lockutils [req-4ff30d35-1115-4870-acf1-ca48ed3554cf req-5414ed47-950c-4993-a18a-53acd9263eba 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquired lock "refresh_cache-ad24fa25-1660-453a-ad2c-f873360adfae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:47:40 compute-0 nova_compute[185389]: 2026-01-26 16:47:40.918 185393 DEBUG nova.network.neutron [req-4ff30d35-1115-4870-acf1-ca48ed3554cf req-5414ed47-950c-4993-a18a-53acd9263eba 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: ad24fa25-1660-453a-ad2c-f873360adfae] Refreshing network info cache for port 97c831fb-1c5a-4bb0-a6ac-9e8b5bf3600d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:47:40 compute-0 nova_compute[185389]: 2026-01-26 16:47:40.921 185393 DEBUG nova.virt.libvirt.driver [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: ad24fa25-1660-453a-ad2c-f873360adfae] Start _get_guest_xml network_info=[{"id": "97c831fb-1c5a-4bb0-a6ac-9e8b5bf3600d", "address": "fa:16:3e:a4:27:56", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.124", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97c831fb-1c", "ovs_interfaceid": "97c831fb-1c5a-4bb0-a6ac-9e8b5bf3600d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-01-26T16:35:52Z,direct_url=<?>,disk_format='qcow2',id=718285d9-0264-40f4-9fb3-d2faff180284,min_disk=0,min_ram=0,name='cirros',owner='aa8f1f3bbce34237a208c8e92ca9286f',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-01-26T16:35:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'boot_index': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '718285d9-0264-40f4-9fb3-d2faff180284'}], 'ephemerals': [{'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'device_name': '/dev/vdb', 'disk_bus': 'virtio', 'size': 1, 'encryption_secret_uuid': None, 'encryption_format': None, 'encrypted': False}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:47:40 compute-0 nova_compute[185389]: 2026-01-26 16:47:40.929 185393 WARNING nova.virt.libvirt.driver [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:47:40 compute-0 nova_compute[185389]: 2026-01-26 16:47:40.941 185393 DEBUG nova.virt.libvirt.host [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:47:40 compute-0 nova_compute[185389]: 2026-01-26 16:47:40.942 185393 DEBUG nova.virt.libvirt.host [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:47:40 compute-0 nova_compute[185389]: 2026-01-26 16:47:40.948 185393 DEBUG nova.virt.libvirt.host [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:47:40 compute-0 nova_compute[185389]: 2026-01-26 16:47:40.949 185393 DEBUG nova.virt.libvirt.host [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:47:40 compute-0 nova_compute[185389]: 2026-01-26 16:47:40.950 185393 DEBUG nova.virt.libvirt.driver [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:47:40 compute-0 nova_compute[185389]: 2026-01-26 16:47:40.951 185393 DEBUG nova.virt.hardware [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T16:35:58Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='c2a8df4d-a1d7-42a3-8279-8c7de8a1a662',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-01-26T16:35:52Z,direct_url=<?>,disk_format='qcow2',id=718285d9-0264-40f4-9fb3-d2faff180284,min_disk=0,min_ram=0,name='cirros',owner='aa8f1f3bbce34237a208c8e92ca9286f',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-01-26T16:35:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:47:40 compute-0 nova_compute[185389]: 2026-01-26 16:47:40.951 185393 DEBUG nova.virt.hardware [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:47:40 compute-0 nova_compute[185389]: 2026-01-26 16:47:40.952 185393 DEBUG nova.virt.hardware [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:47:40 compute-0 nova_compute[185389]: 2026-01-26 16:47:40.952 185393 DEBUG nova.virt.hardware [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:47:40 compute-0 nova_compute[185389]: 2026-01-26 16:47:40.952 185393 DEBUG nova.virt.hardware [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:47:40 compute-0 nova_compute[185389]: 2026-01-26 16:47:40.953 185393 DEBUG nova.virt.hardware [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:47:40 compute-0 nova_compute[185389]: 2026-01-26 16:47:40.953 185393 DEBUG nova.virt.hardware [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:47:40 compute-0 nova_compute[185389]: 2026-01-26 16:47:40.953 185393 DEBUG nova.virt.hardware [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:47:40 compute-0 nova_compute[185389]: 2026-01-26 16:47:40.954 185393 DEBUG nova.virt.hardware [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:47:40 compute-0 nova_compute[185389]: 2026-01-26 16:47:40.954 185393 DEBUG nova.virt.hardware [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:47:40 compute-0 nova_compute[185389]: 2026-01-26 16:47:40.954 185393 DEBUG nova.virt.hardware [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:47:40 compute-0 nova_compute[185389]: 2026-01-26 16:47:40.958 185393 DEBUG nova.virt.libvirt.vif [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:47:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-vo2qfhx-tzlt2x4t3ov5-sakwwya3bplf-vnf-jp2oxis2stdo',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-vo2qfhx-tzlt2x4t3ov5-sakwwya3bplf-vnf-jp2oxis2stdo',id=5,image_ref='718285d9-0264-40f4-9fb3-d2faff180284',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='06b33269-d1c6-4fb9-a44b-be304982a550'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='aa8f1f3bbce34237a208c8e92ca9286f',ramdisk_id='',reservation_id='r-3ekc1qq3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='718285d9-0264-40f4-9fb3-d2faff180284',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha2
56='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:47:31Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT02NzI5OTY5NzQwMjk0NDkyNDIwPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTY3Mjk5Njk3NDAyOTQ0OTI0MjA9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NjcyOTk2OTc0MDI5NDQ5MjQyMD09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTY3Mjk5Njk3NDAyOTQ0OTI0MjA9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uO
iBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvb
GliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT02NzI5OTY5NzQwMjk0NDkyNDIwPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT02NzI5OTY5NzQwMjk0NDkyNDIwPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob
2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJnc
Jan 26 16:47:40 compute-0 nova_compute[185389]: ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NjcyOTk2OTc0MDI5NDQ5MjQyMD09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1Uc
mFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTY3Mjk5Njk3NDAyOTQ0OTI0MjA9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT02NzI5OTY5NzQwMjk0NDkyNDIwPT0tLQo=',user_id='3c0ab9326d69400aa6a4a91432885d7f',uuid=ad24fa25-1660-453a-ad2c-f873360adfae,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "97c831fb-1c5a-4bb0-a6ac-9e8b5bf3600d", "address": "fa:16:3e:a4:27:56", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.124", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97c831fb-1c", "ovs_interfaceid": "97c831fb-1c5a-4bb0-a6ac-9e8b5bf3600d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:47:40 compute-0 nova_compute[185389]: 2026-01-26 16:47:40.958 185393 DEBUG nova.network.os_vif_util [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Converting VIF {"id": "97c831fb-1c5a-4bb0-a6ac-9e8b5bf3600d", "address": "fa:16:3e:a4:27:56", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.124", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97c831fb-1c", "ovs_interfaceid": "97c831fb-1c5a-4bb0-a6ac-9e8b5bf3600d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:47:40 compute-0 nova_compute[185389]: 2026-01-26 16:47:40.959 185393 DEBUG nova.network.os_vif_util [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a4:27:56,bridge_name='br-int',has_traffic_filtering=True,id=97c831fb-1c5a-4bb0-a6ac-9e8b5bf3600d,network=Network(74318d1e-b1d8-47d5-8ac3-218d758610fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap97c831fb-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:47:40 compute-0 nova_compute[185389]: 2026-01-26 16:47:40.960 185393 DEBUG nova.objects.instance [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lazy-loading 'pci_devices' on Instance uuid ad24fa25-1660-453a-ad2c-f873360adfae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:47:40 compute-0 nova_compute[185389]: 2026-01-26 16:47:40.981 185393 DEBUG nova.virt.libvirt.driver [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: ad24fa25-1660-453a-ad2c-f873360adfae] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:47:40 compute-0 nova_compute[185389]:   <uuid>ad24fa25-1660-453a-ad2c-f873360adfae</uuid>
Jan 26 16:47:40 compute-0 nova_compute[185389]:   <name>instance-00000005</name>
Jan 26 16:47:40 compute-0 nova_compute[185389]:   <memory>524288</memory>
Jan 26 16:47:40 compute-0 nova_compute[185389]:   <vcpu>1</vcpu>
Jan 26 16:47:40 compute-0 nova_compute[185389]:   <metadata>
Jan 26 16:47:40 compute-0 nova_compute[185389]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:47:40 compute-0 nova_compute[185389]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:47:40 compute-0 nova_compute[185389]:       <nova:name>vn-vo2qfhx-tzlt2x4t3ov5-sakwwya3bplf-vnf-jp2oxis2stdo</nova:name>
Jan 26 16:47:40 compute-0 nova_compute[185389]:       <nova:creationTime>2026-01-26 16:47:40</nova:creationTime>
Jan 26 16:47:40 compute-0 nova_compute[185389]:       <nova:flavor name="m1.small">
Jan 26 16:47:40 compute-0 nova_compute[185389]:         <nova:memory>512</nova:memory>
Jan 26 16:47:40 compute-0 nova_compute[185389]:         <nova:disk>1</nova:disk>
Jan 26 16:47:40 compute-0 nova_compute[185389]:         <nova:swap>0</nova:swap>
Jan 26 16:47:40 compute-0 nova_compute[185389]:         <nova:ephemeral>1</nova:ephemeral>
Jan 26 16:47:40 compute-0 nova_compute[185389]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:47:40 compute-0 nova_compute[185389]:       </nova:flavor>
Jan 26 16:47:40 compute-0 nova_compute[185389]:       <nova:owner>
Jan 26 16:47:40 compute-0 nova_compute[185389]:         <nova:user uuid="3c0ab9326d69400aa6a4a91432885d7f">admin</nova:user>
Jan 26 16:47:40 compute-0 nova_compute[185389]:         <nova:project uuid="aa8f1f3bbce34237a208c8e92ca9286f">admin</nova:project>
Jan 26 16:47:40 compute-0 nova_compute[185389]:       </nova:owner>
Jan 26 16:47:40 compute-0 nova_compute[185389]:       <nova:root type="image" uuid="718285d9-0264-40f4-9fb3-d2faff180284"/>
Jan 26 16:47:40 compute-0 nova_compute[185389]:       <nova:ports>
Jan 26 16:47:40 compute-0 nova_compute[185389]:         <nova:port uuid="97c831fb-1c5a-4bb0-a6ac-9e8b5bf3600d">
Jan 26 16:47:40 compute-0 nova_compute[185389]:           <nova:ip type="fixed" address="192.168.0.124" ipVersion="4"/>
Jan 26 16:47:40 compute-0 nova_compute[185389]:         </nova:port>
Jan 26 16:47:40 compute-0 nova_compute[185389]:       </nova:ports>
Jan 26 16:47:40 compute-0 nova_compute[185389]:     </nova:instance>
Jan 26 16:47:40 compute-0 nova_compute[185389]:   </metadata>
Jan 26 16:47:40 compute-0 nova_compute[185389]:   <sysinfo type="smbios">
Jan 26 16:47:40 compute-0 nova_compute[185389]:     <system>
Jan 26 16:47:40 compute-0 nova_compute[185389]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:47:40 compute-0 nova_compute[185389]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:47:40 compute-0 nova_compute[185389]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:47:40 compute-0 nova_compute[185389]:       <entry name="serial">ad24fa25-1660-453a-ad2c-f873360adfae</entry>
Jan 26 16:47:40 compute-0 nova_compute[185389]:       <entry name="uuid">ad24fa25-1660-453a-ad2c-f873360adfae</entry>
Jan 26 16:47:40 compute-0 nova_compute[185389]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:47:40 compute-0 nova_compute[185389]:     </system>
Jan 26 16:47:40 compute-0 nova_compute[185389]:   </sysinfo>
Jan 26 16:47:40 compute-0 nova_compute[185389]:   <os>
Jan 26 16:47:40 compute-0 nova_compute[185389]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:47:40 compute-0 nova_compute[185389]:     <boot dev="hd"/>
Jan 26 16:47:40 compute-0 nova_compute[185389]:     <smbios mode="sysinfo"/>
Jan 26 16:47:40 compute-0 nova_compute[185389]:   </os>
Jan 26 16:47:40 compute-0 nova_compute[185389]:   <features>
Jan 26 16:47:40 compute-0 nova_compute[185389]:     <acpi/>
Jan 26 16:47:40 compute-0 nova_compute[185389]:     <apic/>
Jan 26 16:47:40 compute-0 nova_compute[185389]:     <vmcoreinfo/>
Jan 26 16:47:40 compute-0 nova_compute[185389]:   </features>
Jan 26 16:47:40 compute-0 nova_compute[185389]:   <clock offset="utc">
Jan 26 16:47:40 compute-0 nova_compute[185389]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:47:40 compute-0 nova_compute[185389]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:47:40 compute-0 nova_compute[185389]:     <timer name="hpet" present="no"/>
Jan 26 16:47:40 compute-0 nova_compute[185389]:   </clock>
Jan 26 16:47:40 compute-0 nova_compute[185389]:   <cpu mode="host-model" match="exact">
Jan 26 16:47:40 compute-0 nova_compute[185389]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:47:40 compute-0 nova_compute[185389]:   </cpu>
Jan 26 16:47:40 compute-0 nova_compute[185389]:   <devices>
Jan 26 16:47:40 compute-0 nova_compute[185389]:     <disk type="file" device="disk">
Jan 26 16:47:40 compute-0 nova_compute[185389]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 26 16:47:40 compute-0 nova_compute[185389]:       <source file="/var/lib/nova/instances/ad24fa25-1660-453a-ad2c-f873360adfae/disk"/>
Jan 26 16:47:40 compute-0 nova_compute[185389]:       <target dev="vda" bus="virtio"/>
Jan 26 16:47:40 compute-0 nova_compute[185389]:     </disk>
Jan 26 16:47:40 compute-0 nova_compute[185389]:     <disk type="file" device="disk">
Jan 26 16:47:40 compute-0 nova_compute[185389]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 26 16:47:40 compute-0 nova_compute[185389]:       <source file="/var/lib/nova/instances/ad24fa25-1660-453a-ad2c-f873360adfae/disk.eph0"/>
Jan 26 16:47:40 compute-0 nova_compute[185389]:       <target dev="vdb" bus="virtio"/>
Jan 26 16:47:40 compute-0 nova_compute[185389]:     </disk>
Jan 26 16:47:40 compute-0 nova_compute[185389]:     <disk type="file" device="cdrom">
Jan 26 16:47:40 compute-0 nova_compute[185389]:       <driver name="qemu" type="raw" cache="none"/>
Jan 26 16:47:40 compute-0 nova_compute[185389]:       <source file="/var/lib/nova/instances/ad24fa25-1660-453a-ad2c-f873360adfae/disk.config"/>
Jan 26 16:47:40 compute-0 nova_compute[185389]:       <target dev="sda" bus="sata"/>
Jan 26 16:47:40 compute-0 nova_compute[185389]:     </disk>
Jan 26 16:47:40 compute-0 nova_compute[185389]:     <interface type="ethernet">
Jan 26 16:47:40 compute-0 nova_compute[185389]:       <mac address="fa:16:3e:a4:27:56"/>
Jan 26 16:47:40 compute-0 nova_compute[185389]:       <model type="virtio"/>
Jan 26 16:47:40 compute-0 nova_compute[185389]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:47:40 compute-0 nova_compute[185389]:       <mtu size="1442"/>
Jan 26 16:47:40 compute-0 nova_compute[185389]:       <target dev="tap97c831fb-1c"/>
Jan 26 16:47:40 compute-0 nova_compute[185389]:     </interface>
Jan 26 16:47:40 compute-0 nova_compute[185389]:     <serial type="pty">
Jan 26 16:47:40 compute-0 nova_compute[185389]:       <log file="/var/lib/nova/instances/ad24fa25-1660-453a-ad2c-f873360adfae/console.log" append="off"/>
Jan 26 16:47:40 compute-0 nova_compute[185389]:     </serial>
Jan 26 16:47:40 compute-0 nova_compute[185389]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:47:40 compute-0 nova_compute[185389]:     <video>
Jan 26 16:47:40 compute-0 nova_compute[185389]:       <model type="virtio"/>
Jan 26 16:47:40 compute-0 nova_compute[185389]:     </video>
Jan 26 16:47:40 compute-0 nova_compute[185389]:     <input type="tablet" bus="usb"/>
Jan 26 16:47:40 compute-0 nova_compute[185389]:     <rng model="virtio">
Jan 26 16:47:40 compute-0 nova_compute[185389]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:47:40 compute-0 nova_compute[185389]:     </rng>
Jan 26 16:47:40 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:47:40 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:47:40 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:47:40 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:47:40 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:47:40 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:47:40 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:47:40 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:47:40 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:47:40 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:47:40 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:47:40 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:47:40 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:47:40 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:47:40 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:47:40 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:47:40 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:47:40 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:47:40 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:47:40 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:47:40 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:47:40 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:47:40 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:47:40 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:47:40 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:47:40 compute-0 nova_compute[185389]:     <controller type="usb" index="0"/>
Jan 26 16:47:40 compute-0 nova_compute[185389]:     <memballoon model="virtio">
Jan 26 16:47:40 compute-0 nova_compute[185389]:       <stats period="10"/>
Jan 26 16:47:40 compute-0 nova_compute[185389]:     </memballoon>
Jan 26 16:47:40 compute-0 nova_compute[185389]:   </devices>
Jan 26 16:47:40 compute-0 nova_compute[185389]: </domain>
Jan 26 16:47:40 compute-0 nova_compute[185389]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:47:40 compute-0 nova_compute[185389]: 2026-01-26 16:47:40.982 185393 DEBUG nova.compute.manager [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: ad24fa25-1660-453a-ad2c-f873360adfae] Preparing to wait for external event network-vif-plugged-97c831fb-1c5a-4bb0-a6ac-9e8b5bf3600d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:47:40 compute-0 nova_compute[185389]: 2026-01-26 16:47:40.993 185393 DEBUG oslo_concurrency.lockutils [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Acquiring lock "ad24fa25-1660-453a-ad2c-f873360adfae-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:47:40 compute-0 nova_compute[185389]: 2026-01-26 16:47:40.994 185393 DEBUG oslo_concurrency.lockutils [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "ad24fa25-1660-453a-ad2c-f873360adfae-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:47:40 compute-0 nova_compute[185389]: 2026-01-26 16:47:40.994 185393 DEBUG oslo_concurrency.lockutils [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "ad24fa25-1660-453a-ad2c-f873360adfae-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:47:40 compute-0 nova_compute[185389]: 2026-01-26 16:47:40.995 185393 DEBUG nova.virt.libvirt.vif [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:47:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-vo2qfhx-tzlt2x4t3ov5-sakwwya3bplf-vnf-jp2oxis2stdo',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-vo2qfhx-tzlt2x4t3ov5-sakwwya3bplf-vnf-jp2oxis2stdo',id=5,image_ref='718285d9-0264-40f4-9fb3-d2faff180284',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='06b33269-d1c6-4fb9-a44b-be304982a550'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='aa8f1f3bbce34237a208c8e92ca9286f',ramdisk_id='',reservation_id='r-3ekc1qq3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='718285d9-0264-40f4-9fb3-d2faff180284',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.open
stack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:47:31Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT02NzI5OTY5NzQwMjk0NDkyNDIwPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTY3Mjk5Njk3NDAyOTQ0OTI0MjA9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NjcyOTk2OTc0MDI5NDQ5MjQyMD09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTY3Mjk5Njk3NDAyOTQ0OTI0MjA9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3B
vc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4
oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT02NzI5OTY5NzQwMjk0NDkyNDIwPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT02NzI5OTY5NzQwMjk0NDkyNDIwPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2d
TdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9
Jan 26 16:47:40 compute-0 nova_compute[185389]: wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NjcyOTk2OTc0MDI5NDQ5MjQyMD09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29
udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTY3Mjk5Njk3NDAyOTQ0OTI0MjA9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT02NzI5OTY5NzQwMjk0NDkyNDIwPT0tLQo=',user_id='3c0ab9326d69400aa6a4a91432885d7f',uuid=ad24fa25-1660-453a-ad2c-f873360adfae,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "97c831fb-1c5a-4bb0-a6ac-9e8b5bf3600d", "address": "fa:16:3e:a4:27:56", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.124", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97c831fb-1c", "ovs_interfaceid": "97c831fb-1c5a-4bb0-a6ac-9e8b5bf3600d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:47:40 compute-0 nova_compute[185389]: 2026-01-26 16:47:40.996 185393 DEBUG nova.network.os_vif_util [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Converting VIF {"id": "97c831fb-1c5a-4bb0-a6ac-9e8b5bf3600d", "address": "fa:16:3e:a4:27:56", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.124", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97c831fb-1c", "ovs_interfaceid": "97c831fb-1c5a-4bb0-a6ac-9e8b5bf3600d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:47:40 compute-0 nova_compute[185389]: 2026-01-26 16:47:40.996 185393 DEBUG nova.network.os_vif_util [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a4:27:56,bridge_name='br-int',has_traffic_filtering=True,id=97c831fb-1c5a-4bb0-a6ac-9e8b5bf3600d,network=Network(74318d1e-b1d8-47d5-8ac3-218d758610fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap97c831fb-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:47:40 compute-0 nova_compute[185389]: 2026-01-26 16:47:40.997 185393 DEBUG os_vif [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a4:27:56,bridge_name='br-int',has_traffic_filtering=True,id=97c831fb-1c5a-4bb0-a6ac-9e8b5bf3600d,network=Network(74318d1e-b1d8-47d5-8ac3-218d758610fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap97c831fb-1c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:47:40 compute-0 nova_compute[185389]: 2026-01-26 16:47:40.997 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:47:40 compute-0 nova_compute[185389]: 2026-01-26 16:47:40.998 185393 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:47:40 compute-0 nova_compute[185389]: 2026-01-26 16:47:40.998 185393 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:47:41 compute-0 nova_compute[185389]: 2026-01-26 16:47:41.002 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:47:41 compute-0 nova_compute[185389]: 2026-01-26 16:47:41.002 185393 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap97c831fb-1c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:47:41 compute-0 nova_compute[185389]: 2026-01-26 16:47:41.003 185393 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap97c831fb-1c, col_values=(('external_ids', {'iface-id': '97c831fb-1c5a-4bb0-a6ac-9e8b5bf3600d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a4:27:56', 'vm-uuid': 'ad24fa25-1660-453a-ad2c-f873360adfae'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:47:41 compute-0 NetworkManager[56253]: <info>  [1769446061.0058] manager: (tap97c831fb-1c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/33)
Jan 26 16:47:41 compute-0 nova_compute[185389]: 2026-01-26 16:47:41.004 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:47:41 compute-0 nova_compute[185389]: 2026-01-26 16:47:41.007 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:47:41 compute-0 nova_compute[185389]: 2026-01-26 16:47:41.013 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:47:41 compute-0 nova_compute[185389]: 2026-01-26 16:47:41.015 185393 INFO os_vif [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a4:27:56,bridge_name='br-int',has_traffic_filtering=True,id=97c831fb-1c5a-4bb0-a6ac-9e8b5bf3600d,network=Network(74318d1e-b1d8-47d5-8ac3-218d758610fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap97c831fb-1c')
Jan 26 16:47:41 compute-0 nova_compute[185389]: 2026-01-26 16:47:41.088 185393 DEBUG nova.virt.libvirt.driver [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:47:41 compute-0 nova_compute[185389]: 2026-01-26 16:47:41.089 185393 DEBUG nova.virt.libvirt.driver [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:47:41 compute-0 nova_compute[185389]: 2026-01-26 16:47:41.089 185393 DEBUG nova.virt.libvirt.driver [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:47:41 compute-0 nova_compute[185389]: 2026-01-26 16:47:41.089 185393 DEBUG nova.virt.libvirt.driver [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] No VIF found with MAC fa:16:3e:a4:27:56, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:47:41 compute-0 nova_compute[185389]: 2026-01-26 16:47:41.090 185393 INFO nova.virt.libvirt.driver [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: ad24fa25-1660-453a-ad2c-f873360adfae] Using config drive
Jan 26 16:47:41 compute-0 podman[242832]: 2026-01-26 16:47:41.1564406 +0000 UTC m=+0.087769558 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, container_name=kepler, managed_by=edpm_ansible, release=1214.1726694543, config_id=kepler, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, distribution-scope=public, release-0.7.12=)
Jan 26 16:47:41 compute-0 rsyslogd[235842]: message too long (8192) with configured size 8096, begin of message is: 2026-01-26 16:47:40.958 185393 DEBUG nova.virt.libvirt.vif [None req-ea5bcace-b8 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 26 16:47:41 compute-0 podman[242831]: 2026-01-26 16:47:41.185818006 +0000 UTC m=+0.120667920 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 
9 Base Image, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 26 16:47:41 compute-0 podman[242830]: 2026-01-26 16:47:41.194882171 +0000 UTC m=+0.129828647 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, 
org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 16:47:41 compute-0 rsyslogd[235842]: message too long (8192) with configured size 8096, begin of message is: 2026-01-26 16:47:40.995 185393 DEBUG nova.virt.libvirt.vif [None req-ea5bcace-b8 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 26 16:47:41 compute-0 nova_compute[185389]: 2026-01-26 16:47:41.650 185393 INFO nova.network.neutron [req-4ff30d35-1115-4870-acf1-ca48ed3554cf req-5414ed47-950c-4993-a18a-53acd9263eba 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: ad24fa25-1660-453a-ad2c-f873360adfae] Port 97c831fb-1c5a-4bb0-a6ac-9e8b5bf3600d from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Jan 26 16:47:41 compute-0 nova_compute[185389]: 2026-01-26 16:47:41.651 185393 DEBUG nova.network.neutron [req-4ff30d35-1115-4870-acf1-ca48ed3554cf req-5414ed47-950c-4993-a18a-53acd9263eba 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: ad24fa25-1660-453a-ad2c-f873360adfae] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:47:41 compute-0 nova_compute[185389]: 2026-01-26 16:47:41.680 185393 DEBUG oslo_concurrency.lockutils [req-4ff30d35-1115-4870-acf1-ca48ed3554cf req-5414ed47-950c-4993-a18a-53acd9263eba 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Releasing lock "refresh_cache-ad24fa25-1660-453a-ad2c-f873360adfae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:47:41 compute-0 nova_compute[185389]: 2026-01-26 16:47:41.681 185393 DEBUG oslo_concurrency.lockutils [req-62ab04b3-6f66-41b4-a015-3f73cba9dd54 req-2d6d1156-5e10-40b3-91cb-7bf142f589a7 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquired lock "refresh_cache-ad24fa25-1660-453a-ad2c-f873360adfae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:47:41 compute-0 nova_compute[185389]: 2026-01-26 16:47:41.681 185393 DEBUG nova.network.neutron [req-62ab04b3-6f66-41b4-a015-3f73cba9dd54 req-2d6d1156-5e10-40b3-91cb-7bf142f589a7 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: ad24fa25-1660-453a-ad2c-f873360adfae] Refreshing network info cache for port 97c831fb-1c5a-4bb0-a6ac-9e8b5bf3600d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:47:41 compute-0 nova_compute[185389]: 2026-01-26 16:47:41.711 185393 INFO nova.virt.libvirt.driver [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: ad24fa25-1660-453a-ad2c-f873360adfae] Creating config drive at /var/lib/nova/instances/ad24fa25-1660-453a-ad2c-f873360adfae/disk.config
Jan 26 16:47:41 compute-0 nova_compute[185389]: 2026-01-26 16:47:41.719 185393 DEBUG oslo_concurrency.processutils [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ad24fa25-1660-453a-ad2c-f873360adfae/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp858ps1pd execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:47:41 compute-0 nova_compute[185389]: 2026-01-26 16:47:41.798 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:47:41 compute-0 nova_compute[185389]: 2026-01-26 16:47:41.847 185393 DEBUG oslo_concurrency.processutils [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ad24fa25-1660-453a-ad2c-f873360adfae/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp858ps1pd" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:47:41 compute-0 kernel: tap97c831fb-1c: entered promiscuous mode
Jan 26 16:47:41 compute-0 NetworkManager[56253]: <info>  [1769446061.9418] manager: (tap97c831fb-1c): new Tun device (/org/freedesktop/NetworkManager/Devices/34)
Jan 26 16:47:41 compute-0 nova_compute[185389]: 2026-01-26 16:47:41.944 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:47:41 compute-0 nova_compute[185389]: 2026-01-26 16:47:41.953 185393 DEBUG nova.compute.manager [req-9270b5c6-f72d-4d93-9ddc-41ac55503ee2 req-3cb33c61-2ede-4b0c-afc3-20a62ce51442 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] Received event network-vif-unplugged-5e252863-184d-4e1e-a33d-6e280cd72b51 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:47:41 compute-0 nova_compute[185389]: 2026-01-26 16:47:41.953 185393 DEBUG oslo_concurrency.lockutils [req-9270b5c6-f72d-4d93-9ddc-41ac55503ee2 req-3cb33c61-2ede-4b0c-afc3-20a62ce51442 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquiring lock "2ee04f75-dc75-489c-85b5-19cd6d573bf1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:47:41 compute-0 nova_compute[185389]: 2026-01-26 16:47:41.954 185393 DEBUG oslo_concurrency.lockutils [req-9270b5c6-f72d-4d93-9ddc-41ac55503ee2 req-3cb33c61-2ede-4b0c-afc3-20a62ce51442 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "2ee04f75-dc75-489c-85b5-19cd6d573bf1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:47:41 compute-0 nova_compute[185389]: 2026-01-26 16:47:41.954 185393 DEBUG oslo_concurrency.lockutils [req-9270b5c6-f72d-4d93-9ddc-41ac55503ee2 req-3cb33c61-2ede-4b0c-afc3-20a62ce51442 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "2ee04f75-dc75-489c-85b5-19cd6d573bf1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:47:41 compute-0 nova_compute[185389]: 2026-01-26 16:47:41.955 185393 DEBUG nova.compute.manager [req-9270b5c6-f72d-4d93-9ddc-41ac55503ee2 req-3cb33c61-2ede-4b0c-afc3-20a62ce51442 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] No waiting events found dispatching network-vif-unplugged-5e252863-184d-4e1e-a33d-6e280cd72b51 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:47:41 compute-0 nova_compute[185389]: 2026-01-26 16:47:41.955 185393 DEBUG nova.compute.manager [req-9270b5c6-f72d-4d93-9ddc-41ac55503ee2 req-3cb33c61-2ede-4b0c-afc3-20a62ce51442 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] Received event network-vif-unplugged-5e252863-184d-4e1e-a33d-6e280cd72b51 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 16:47:41 compute-0 nova_compute[185389]: 2026-01-26 16:47:41.955 185393 DEBUG nova.compute.manager [req-9270b5c6-f72d-4d93-9ddc-41ac55503ee2 req-3cb33c61-2ede-4b0c-afc3-20a62ce51442 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] Received event network-vif-plugged-5e252863-184d-4e1e-a33d-6e280cd72b51 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:47:41 compute-0 nova_compute[185389]: 2026-01-26 16:47:41.955 185393 DEBUG oslo_concurrency.lockutils [req-9270b5c6-f72d-4d93-9ddc-41ac55503ee2 req-3cb33c61-2ede-4b0c-afc3-20a62ce51442 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquiring lock "2ee04f75-dc75-489c-85b5-19cd6d573bf1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:47:41 compute-0 nova_compute[185389]: 2026-01-26 16:47:41.955 185393 DEBUG oslo_concurrency.lockutils [req-9270b5c6-f72d-4d93-9ddc-41ac55503ee2 req-3cb33c61-2ede-4b0c-afc3-20a62ce51442 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "2ee04f75-dc75-489c-85b5-19cd6d573bf1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:47:41 compute-0 nova_compute[185389]: 2026-01-26 16:47:41.955 185393 DEBUG oslo_concurrency.lockutils [req-9270b5c6-f72d-4d93-9ddc-41ac55503ee2 req-3cb33c61-2ede-4b0c-afc3-20a62ce51442 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "2ee04f75-dc75-489c-85b5-19cd6d573bf1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:47:41 compute-0 nova_compute[185389]: 2026-01-26 16:47:41.956 185393 DEBUG nova.compute.manager [req-9270b5c6-f72d-4d93-9ddc-41ac55503ee2 req-3cb33c61-2ede-4b0c-afc3-20a62ce51442 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] No waiting events found dispatching network-vif-plugged-5e252863-184d-4e1e-a33d-6e280cd72b51 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:47:41 compute-0 nova_compute[185389]: 2026-01-26 16:47:41.956 185393 WARNING nova.compute.manager [req-9270b5c6-f72d-4d93-9ddc-41ac55503ee2 req-3cb33c61-2ede-4b0c-afc3-20a62ce51442 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] Received unexpected event network-vif-plugged-5e252863-184d-4e1e-a33d-6e280cd72b51 for instance with vm_state active and task_state deleting.
Jan 26 16:47:41 compute-0 nova_compute[185389]: 2026-01-26 16:47:41.956 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:47:41 compute-0 nova_compute[185389]: 2026-01-26 16:47:41.971 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:47:41 compute-0 systemd-udevd[242904]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:47:41 compute-0 nova_compute[185389]: 2026-01-26 16:47:41.989 185393 DEBUG nova.network.neutron [req-62ab04b3-6f66-41b4-a015-3f73cba9dd54 req-2d6d1156-5e10-40b3-91cb-7bf142f589a7 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: ad24fa25-1660-453a-ad2c-f873360adfae] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:47:41 compute-0 NetworkManager[56253]: <info>  [1769446061.9984] device (tap97c831fb-1c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:47:41 compute-0 NetworkManager[56253]: <info>  [1769446061.9991] device (tap97c831fb-1c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:47:42 compute-0 systemd-machined[156679]: New machine qemu-5-instance-00000005.
Jan 26 16:47:42 compute-0 systemd[1]: Started Virtual Machine qemu-5-instance-00000005.
Jan 26 16:47:42 compute-0 nova_compute[185389]: 2026-01-26 16:47:42.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:47:42 compute-0 nova_compute[185389]: 2026-01-26 16:47:42.721 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:47:42 compute-0 nova_compute[185389]: 2026-01-26 16:47:42.860 185393 DEBUG nova.network.neutron [req-62ab04b3-6f66-41b4-a015-3f73cba9dd54 req-2d6d1156-5e10-40b3-91cb-7bf142f589a7 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: ad24fa25-1660-453a-ad2c-f873360adfae] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:47:42 compute-0 nova_compute[185389]: 2026-01-26 16:47:42.882 185393 DEBUG oslo_concurrency.lockutils [req-62ab04b3-6f66-41b4-a015-3f73cba9dd54 req-2d6d1156-5e10-40b3-91cb-7bf142f589a7 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Releasing lock "refresh_cache-ad24fa25-1660-453a-ad2c-f873360adfae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:47:43 compute-0 nova_compute[185389]: 2026-01-26 16:47:43.097 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "refresh_cache-a2578f61-3f19-40f4-a32f-97cf22569550" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:47:43 compute-0 nova_compute[185389]: 2026-01-26 16:47:43.098 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquired lock "refresh_cache-a2578f61-3f19-40f4-a32f-97cf22569550" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:47:43 compute-0 nova_compute[185389]: 2026-01-26 16:47:43.098 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 26 16:47:43 compute-0 nova_compute[185389]: 2026-01-26 16:47:43.101 185393 DEBUG nova.virt.driver [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Emitting event <LifecycleEvent: 1769446063.10077, ad24fa25-1660-453a-ad2c-f873360adfae => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:47:43 compute-0 nova_compute[185389]: 2026-01-26 16:47:43.101 185393 INFO nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: ad24fa25-1660-453a-ad2c-f873360adfae] VM Started (Lifecycle Event)
Jan 26 16:47:43 compute-0 nova_compute[185389]: 2026-01-26 16:47:43.127 185393 DEBUG nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: ad24fa25-1660-453a-ad2c-f873360adfae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:47:43 compute-0 nova_compute[185389]: 2026-01-26 16:47:43.133 185393 DEBUG nova.virt.driver [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Emitting event <LifecycleEvent: 1769446063.1028056, ad24fa25-1660-453a-ad2c-f873360adfae => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:47:43 compute-0 nova_compute[185389]: 2026-01-26 16:47:43.133 185393 INFO nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: ad24fa25-1660-453a-ad2c-f873360adfae] VM Paused (Lifecycle Event)
Jan 26 16:47:43 compute-0 nova_compute[185389]: 2026-01-26 16:47:43.152 185393 DEBUG nova.network.neutron [-] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:47:43 compute-0 nova_compute[185389]: 2026-01-26 16:47:43.181 185393 DEBUG nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: ad24fa25-1660-453a-ad2c-f873360adfae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:47:43 compute-0 nova_compute[185389]: 2026-01-26 16:47:43.186 185393 DEBUG nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: ad24fa25-1660-453a-ad2c-f873360adfae] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:47:43 compute-0 nova_compute[185389]: 2026-01-26 16:47:43.191 185393 INFO nova.compute.manager [-] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] Took 4.07 seconds to deallocate network for instance.
Jan 26 16:47:43 compute-0 nova_compute[185389]: 2026-01-26 16:47:43.232 185393 INFO nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: ad24fa25-1660-453a-ad2c-f873360adfae] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:47:43 compute-0 nova_compute[185389]: 2026-01-26 16:47:43.270 185393 DEBUG oslo_concurrency.lockutils [None req-7c0a1ba7-1466-484c-89ae-4e3114c7dd40 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:47:43 compute-0 nova_compute[185389]: 2026-01-26 16:47:43.271 185393 DEBUG oslo_concurrency.lockutils [None req-7c0a1ba7-1466-484c-89ae-4e3114c7dd40 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:47:43 compute-0 nova_compute[185389]: 2026-01-26 16:47:43.709 185393 DEBUG nova.scheduler.client.report [None req-7c0a1ba7-1466-484c-89ae-4e3114c7dd40 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Refreshing inventories for resource provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 26 16:47:43 compute-0 nova_compute[185389]: 2026-01-26 16:47:43.939 185393 DEBUG nova.scheduler.client.report [None req-7c0a1ba7-1466-484c-89ae-4e3114c7dd40 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Updating ProviderTree inventory for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 26 16:47:43 compute-0 nova_compute[185389]: 2026-01-26 16:47:43.940 185393 DEBUG nova.compute.provider_tree [None req-7c0a1ba7-1466-484c-89ae-4e3114c7dd40 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Updating inventory in ProviderTree for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 26 16:47:44 compute-0 nova_compute[185389]: 2026-01-26 16:47:44.055 185393 DEBUG nova.scheduler.client.report [None req-7c0a1ba7-1466-484c-89ae-4e3114c7dd40 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Refreshing aggregate associations for resource provider b0bb5d31-f35b-4a04-b67d-66acc24fb822, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 26 16:47:44 compute-0 nova_compute[185389]: 2026-01-26 16:47:44.080 185393 DEBUG nova.scheduler.client.report [None req-7c0a1ba7-1466-484c-89ae-4e3114c7dd40 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Refreshing trait associations for resource provider b0bb5d31-f35b-4a04-b67d-66acc24fb822, traits: COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SHA,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_ABM,HW_CPU_X86_AMD_SVM,HW_CPU_X86_F16C,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE42,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_BMI,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_MMX,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 26 16:47:44 compute-0 nova_compute[185389]: 2026-01-26 16:47:44.116 185393 DEBUG nova.compute.manager [req-0472eae6-56b3-49e2-8fce-dec0f9087712 req-898939f9-92f0-4521-9893-8e43c4bf7acd 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: ad24fa25-1660-453a-ad2c-f873360adfae] Received event network-vif-deleted-97c831fb-1c5a-4bb0-a6ac-9e8b5bf3600d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:47:44 compute-0 nova_compute[185389]: 2026-01-26 16:47:44.249 185393 DEBUG nova.compute.provider_tree [None req-7c0a1ba7-1466-484c-89ae-4e3114c7dd40 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:47:44 compute-0 nova_compute[185389]: 2026-01-26 16:47:44.270 185393 DEBUG nova.scheduler.client.report [None req-7c0a1ba7-1466-484c-89ae-4e3114c7dd40 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:47:44 compute-0 nova_compute[185389]: 2026-01-26 16:47:44.320 185393 DEBUG oslo_concurrency.lockutils [None req-7c0a1ba7-1466-484c-89ae-4e3114c7dd40 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.050s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:47:44 compute-0 nova_compute[185389]: 2026-01-26 16:47:44.377 185393 INFO nova.scheduler.client.report [None req-7c0a1ba7-1466-484c-89ae-4e3114c7dd40 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Deleted allocations for instance 2ee04f75-dc75-489c-85b5-19cd6d573bf1
Jan 26 16:47:44 compute-0 nova_compute[185389]: 2026-01-26 16:47:44.454 185393 DEBUG oslo_concurrency.lockutils [None req-7c0a1ba7-1466-484c-89ae-4e3114c7dd40 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "2ee04f75-dc75-489c-85b5-19cd6d573bf1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.485s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:47:44 compute-0 nova_compute[185389]: 2026-01-26 16:47:44.455 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "2ee04f75-dc75-489c-85b5-19cd6d573bf1" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 4.744s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:47:44 compute-0 nova_compute[185389]: 2026-01-26 16:47:44.455 185393 INFO nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] During sync_power_state the instance has a pending task (deleting). Skip.
Jan 26 16:47:44 compute-0 nova_compute[185389]: 2026-01-26 16:47:44.456 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "2ee04f75-dc75-489c-85b5-19cd6d573bf1" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:47:44 compute-0 nova_compute[185389]: 2026-01-26 16:47:44.489 185393 DEBUG nova.network.neutron [req-d0a4a141-d8a8-4777-b88b-8c2015a190ef req-1135322b-1d9f-4cae-9b80-0a3b605135f8 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] Updated VIF entry in instance network info cache for port 5e252863-184d-4e1e-a33d-6e280cd72b51. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:47:44 compute-0 nova_compute[185389]: 2026-01-26 16:47:44.490 185393 DEBUG nova.network.neutron [req-d0a4a141-d8a8-4777-b88b-8c2015a190ef req-1135322b-1d9f-4cae-9b80-0a3b605135f8 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] Updating instance_info_cache with network_info: [{"id": "5e252863-184d-4e1e-a33d-6e280cd72b51", "address": "fa:16:3e:65:38:01", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.173", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e252863-18", "ovs_interfaceid": "5e252863-184d-4e1e-a33d-6e280cd72b51", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:47:44 compute-0 nova_compute[185389]: 2026-01-26 16:47:44.592 185393 DEBUG oslo_concurrency.lockutils [req-d0a4a141-d8a8-4777-b88b-8c2015a190ef req-1135322b-1d9f-4cae-9b80-0a3b605135f8 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Releasing lock "refresh_cache-2ee04f75-dc75-489c-85b5-19cd6d573bf1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:47:44 compute-0 nova_compute[185389]: 2026-01-26 16:47:44.668 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:47:44 compute-0 systemd[1]: Starting libvirt proxy daemon...
Jan 26 16:47:44 compute-0 systemd[1]: Started libvirt proxy daemon.
Jan 26 16:47:46 compute-0 nova_compute[185389]: 2026-01-26 16:47:46.005 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:47:46 compute-0 nova_compute[185389]: 2026-01-26 16:47:46.383 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Updating instance_info_cache with network_info: [{"id": "58a644b5-e3a2-4838-9216-8540447cf0a5", "address": "fa:16:3e:ac:8d:a5", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.107", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58a644b5-e3", "ovs_interfaceid": "58a644b5-e3a2-4838-9216-8540447cf0a5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:47:46 compute-0 nova_compute[185389]: 2026-01-26 16:47:46.403 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Releasing lock "refresh_cache-a2578f61-3f19-40f4-a32f-97cf22569550" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:47:46 compute-0 nova_compute[185389]: 2026-01-26 16:47:46.404 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 26 16:47:46 compute-0 nova_compute[185389]: 2026-01-26 16:47:46.407 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:47:46 compute-0 nova_compute[185389]: 2026-01-26 16:47:46.408 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:47:46 compute-0 nova_compute[185389]: 2026-01-26 16:47:46.408 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:47:46 compute-0 nova_compute[185389]: 2026-01-26 16:47:46.456 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:47:46 compute-0 nova_compute[185389]: 2026-01-26 16:47:46.457 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:47:46 compute-0 nova_compute[185389]: 2026-01-26 16:47:46.457 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:47:46 compute-0 nova_compute[185389]: 2026-01-26 16:47:46.458 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:47:46 compute-0 nova_compute[185389]: 2026-01-26 16:47:46.606 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:47:46 compute-0 nova_compute[185389]: 2026-01-26 16:47:46.674 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:47:46 compute-0 nova_compute[185389]: 2026-01-26 16:47:46.676 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:47:46 compute-0 nova_compute[185389]: 2026-01-26 16:47:46.736 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:47:46 compute-0 nova_compute[185389]: 2026-01-26 16:47:46.738 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:47:46 compute-0 nova_compute[185389]: 2026-01-26 16:47:46.825 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:47:46 compute-0 nova_compute[185389]: 2026-01-26 16:47:46.826 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:47:46 compute-0 nova_compute[185389]: 2026-01-26 16:47:46.894 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:47:46 compute-0 nova_compute[185389]: 2026-01-26 16:47:46.901 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ad24fa25-1660-453a-ad2c-f873360adfae/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:47:46 compute-0 nova_compute[185389]: 2026-01-26 16:47:46.962 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ad24fa25-1660-453a-ad2c-f873360adfae/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:47:46 compute-0 nova_compute[185389]: 2026-01-26 16:47:46.963 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ad24fa25-1660-453a-ad2c-f873360adfae/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:47:47 compute-0 nova_compute[185389]: 2026-01-26 16:47:47.029 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ad24fa25-1660-453a-ad2c-f873360adfae/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:47:47 compute-0 nova_compute[185389]: 2026-01-26 16:47:47.031 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ad24fa25-1660-453a-ad2c-f873360adfae/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:47:47 compute-0 nova_compute[185389]: 2026-01-26 16:47:47.089 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ad24fa25-1660-453a-ad2c-f873360adfae/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:47:47 compute-0 nova_compute[185389]: 2026-01-26 16:47:47.090 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ad24fa25-1660-453a-ad2c-f873360adfae/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:47:47 compute-0 nova_compute[185389]: 2026-01-26 16:47:47.165 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ad24fa25-1660-453a-ad2c-f873360adfae/disk.eph0 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:47:47 compute-0 nova_compute[185389]: 2026-01-26 16:47:47.171 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:47:47 compute-0 nova_compute[185389]: 2026-01-26 16:47:47.232 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:47:47 compute-0 nova_compute[185389]: 2026-01-26 16:47:47.233 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:47:47 compute-0 nova_compute[185389]: 2026-01-26 16:47:47.328 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:47:47 compute-0 nova_compute[185389]: 2026-01-26 16:47:47.329 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:47:47 compute-0 nova_compute[185389]: 2026-01-26 16:47:47.394 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:47:47 compute-0 nova_compute[185389]: 2026-01-26 16:47:47.396 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:47:47 compute-0 nova_compute[185389]: 2026-01-26 16:47:47.461 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:47:47 compute-0 nova_compute[185389]: 2026-01-26 16:47:47.837 185393 WARNING nova.virt.libvirt.driver [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:47:47 compute-0 nova_compute[185389]: 2026-01-26 16:47:47.838 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4926MB free_disk=72.39984512329102GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:47:47 compute-0 nova_compute[185389]: 2026-01-26 16:47:47.839 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:47:47 compute-0 nova_compute[185389]: 2026-01-26 16:47:47.839 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:47:48 compute-0 nova_compute[185389]: 2026-01-26 16:47:48.393 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance 60ba224f-9c5d-4eb4-b501-66d7339832b9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:47:48 compute-0 nova_compute[185389]: 2026-01-26 16:47:48.393 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance a2578f61-3f19-40f4-a32f-97cf22569550 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:47:48 compute-0 nova_compute[185389]: 2026-01-26 16:47:48.394 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance ad24fa25-1660-453a-ad2c-f873360adfae actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:47:48 compute-0 nova_compute[185389]: 2026-01-26 16:47:48.394 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:47:48 compute-0 nova_compute[185389]: 2026-01-26 16:47:48.395 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:47:48 compute-0 nova_compute[185389]: 2026-01-26 16:47:48.724 185393 DEBUG nova.compute.provider_tree [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:47:48 compute-0 nova_compute[185389]: 2026-01-26 16:47:48.745 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:47:48 compute-0 nova_compute[185389]: 2026-01-26 16:47:48.748 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:47:48 compute-0 nova_compute[185389]: 2026-01-26 16:47:48.748 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.909s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:47:48 compute-0 nova_compute[185389]: 2026-01-26 16:47:48.749 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:47:49 compute-0 nova_compute[185389]: 2026-01-26 16:47:49.113 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:47:49 compute-0 nova_compute[185389]: 2026-01-26 16:47:49.114 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:47:49 compute-0 podman[242980]: 2026-01-26 16:47:49.207779742 +0000 UTC m=+0.077566042 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 26 16:47:49 compute-0 nova_compute[185389]: 2026-01-26 16:47:49.671 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:47:50 compute-0 nova_compute[185389]: 2026-01-26 16:47:50.716 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:47:50 compute-0 nova_compute[185389]: 2026-01-26 16:47:50.769 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:47:51 compute-0 nova_compute[185389]: 2026-01-26 16:47:51.007 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:47:53 compute-0 nova_compute[185389]: 2026-01-26 16:47:53.269 185393 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769446058.2676656, 2ee04f75-dc75-489c-85b5-19cd6d573bf1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:47:53 compute-0 nova_compute[185389]: 2026-01-26 16:47:53.270 185393 INFO nova.compute.manager [-] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] VM Stopped (Lifecycle Event)
Jan 26 16:47:53 compute-0 nova_compute[185389]: 2026-01-26 16:47:53.318 185393 DEBUG nova.compute.manager [None req-c7237368-f313-4039-aff6-c58a1de78337 - - - - - -] [instance: 2ee04f75-dc75-489c-85b5-19cd6d573bf1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:47:54 compute-0 nova_compute[185389]: 2026-01-26 16:47:54.673 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:47:56 compute-0 nova_compute[185389]: 2026-01-26 16:47:56.012 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:47:58 compute-0 podman[243004]: 2026-01-26 16:47:58.190574082 +0000 UTC m=+0.074046047 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, release=1755695350, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., vcs-type=git, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, version=9.6, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=openstack_network_exporter, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 26 16:47:59 compute-0 nova_compute[185389]: 2026-01-26 16:47:59.676 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:47:59 compute-0 podman[201244]: time="2026-01-26T16:47:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 16:47:59 compute-0 podman[201244]: @ - - [26/Jan/2026:16:47:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 16:47:59 compute-0 podman[201244]: @ - - [26/Jan/2026:16:47:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4380 "" "Go-http-client/1.1"
Jan 26 16:48:01 compute-0 nova_compute[185389]: 2026-01-26 16:48:01.015 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:48:01 compute-0 podman[243025]: 2026-01-26 16:48:01.228373594 +0000 UTC m=+0.110439852 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260120, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:48:01 compute-0 openstack_network_exporter[204387]: ERROR   16:48:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 16:48:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:48:01 compute-0 openstack_network_exporter[204387]: ERROR   16:48:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 16:48:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:48:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:48:01.727 106955 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:48:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:48:01.728 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:48:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:48:01.728 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:48:04 compute-0 nova_compute[185389]: 2026-01-26 16:48:04.679 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:48:06 compute-0 nova_compute[185389]: 2026-01-26 16:48:06.019 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:48:07 compute-0 podman[243044]: 2026-01-26 16:48:07.193749838 +0000 UTC m=+0.075926398 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 26 16:48:09 compute-0 podman[243068]: 2026-01-26 16:48:09.176811847 +0000 UTC m=+0.065361621 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 26 16:48:09 compute-0 nova_compute[185389]: 2026-01-26 16:48:09.682 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:48:11 compute-0 nova_compute[185389]: 2026-01-26 16:48:11.022 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:48:12 compute-0 podman[243086]: 2026-01-26 16:48:12.222490534 +0000 UTC m=+0.114268386 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:48:12 compute-0 podman[243088]: 2026-01-26 16:48:12.234630612 +0000 UTC m=+0.117827472 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, release=1214.1726694543, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, release-0.7.12=, version=9.4, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Jan 26 16:48:12 compute-0 podman[243087]: 2026-01-26 16:48:12.257740708 +0000 UTC m=+0.132141210 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 26 16:48:14 compute-0 nova_compute[185389]: 2026-01-26 16:48:14.686 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:48:16 compute-0 nova_compute[185389]: 2026-01-26 16:48:16.025 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:48:16 compute-0 ovn_controller[97699]: 2026-01-26T16:48:16Z|00056|memory_trim|INFO|Detected inactivity (last active 30008 ms ago): trimming memory
Jan 26 16:48:19 compute-0 nova_compute[185389]: 2026-01-26 16:48:19.690 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:48:20 compute-0 podman[243148]: 2026-01-26 16:48:20.177314666 +0000 UTC m=+0.065212957 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 16:48:21 compute-0 nova_compute[185389]: 2026-01-26 16:48:21.029 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:48:24 compute-0 nova_compute[185389]: 2026-01-26 16:48:24.693 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:48:26 compute-0 nova_compute[185389]: 2026-01-26 16:48:26.034 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:48:29 compute-0 podman[243172]: 2026-01-26 16:48:29.177681221 +0000 UTC m=+0.065844503 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, version=9.6, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=openstack_network_exporter, io.openshift.expose-services=, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc.)
Jan 26 16:48:29 compute-0 nova_compute[185389]: 2026-01-26 16:48:29.696 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:48:29 compute-0 podman[201244]: time="2026-01-26T16:48:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 16:48:29 compute-0 podman[201244]: @ - - [26/Jan/2026:16:48:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 16:48:29 compute-0 podman[201244]: @ - - [26/Jan/2026:16:48:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4380 "" "Go-http-client/1.1"
Jan 26 16:48:31 compute-0 nova_compute[185389]: 2026-01-26 16:48:31.037 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:48:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:31.340 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 26 16:48:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:31.340 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 26 16:48:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:31.341 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce44fe30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:48:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:31.341 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f04ce8af020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:48:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:31.342 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce44fe30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:48:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:31.342 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce44fe30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:48:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:31.342 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce44fe30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:48:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:31.342 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce44fe30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:48:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:31.342 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce44fe30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:48:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:31.342 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce44fe30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:48:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:31.343 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce44fe30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:48:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:31.343 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce44fe30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:48:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:31.343 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce44fe30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:48:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:31.343 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce44fe30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:48:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:31.343 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce44fe30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:48:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:31.343 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce44fe30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:48:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:31.343 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8adb20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce44fe30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:48:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:31.343 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce44fe30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:48:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:31.343 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce44fe30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:48:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:31.343 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce44fe30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:48:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:31.344 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce44fe30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:48:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:31.344 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce44fe30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:48:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:31.344 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce44fe30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:48:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:31.344 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce44fe30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:48:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:31.344 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce44fe30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:48:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:31.344 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce44fe30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:48:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:31.344 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce44fe30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:48:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:31.344 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce44fe30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:48:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:31.344 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce44fe30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:48:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:31.414 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '60ba224f-9c5d-4eb4-b501-66d7339832b9', 'name': 'test_0', 'flavor': {'id': 'c2a8df4d-a1d7-42a3-8279-8c7de8a1a662', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '718285d9-0264-40f4-9fb3-d2faff180284'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'aa8f1f3bbce34237a208c8e92ca9286f', 'user_id': '3c0ab9326d69400aa6a4a91432885d7f', 'hostId': '5b2ad2004cb5a5985538ff82d4dda707a9aa9c0c35c745039e18e89b', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 26 16:48:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:31.416 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance ad24fa25-1660-453a-ad2c-f873360adfae from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 26 16:48:31 compute-0 openstack_network_exporter[204387]: ERROR   16:48:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 16:48:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:48:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:31.417 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/ad24fa25-1660-453a-ad2c-f873360adfae -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}f609241ecdf9402bd0546eda97196742cf90b225f1ce4eb867c55aad4d129116" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 26 16:48:31 compute-0 openstack_network_exporter[204387]: ERROR   16:48:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 16:48:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:48:32 compute-0 podman[243193]: 2026-01-26 16:48:32.226626345 +0000 UTC m=+0.110412991 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260120, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:48:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:32.705 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1649 Content-Type: application/json Date: Mon, 26 Jan 2026 16:48:31 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-fdacfff2-3701-4d45-be3e-8b2effbec3e9 x-openstack-request-id: req-fdacfff2-3701-4d45-be3e-8b2effbec3e9 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 26 16:48:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:32.705 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "ad24fa25-1660-453a-ad2c-f873360adfae", "name": "vn-vo2qfhx-tzlt2x4t3ov5-sakwwya3bplf-vnf-jp2oxis2stdo", "status": "BUILD", "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "user_id": "3c0ab9326d69400aa6a4a91432885d7f", "metadata": {"metering.server_group": "06b33269-d1c6-4fb9-a44b-be304982a550"}, "hostId": "5b2ad2004cb5a5985538ff82d4dda707a9aa9c0c35c745039e18e89b", "image": {"id": "718285d9-0264-40f4-9fb3-d2faff180284", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/718285d9-0264-40f4-9fb3-d2faff180284"}]}, "flavor": {"id": "c2a8df4d-a1d7-42a3-8279-8c7de8a1a662", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/c2a8df4d-a1d7-42a3-8279-8c7de8a1a662"}]}, "created": "2026-01-26T16:47:22Z", "updated": "2026-01-26T16:47:32Z", "addresses": {}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/ad24fa25-1660-453a-ad2c-f873360adfae"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/ad24fa25-1660-453a-ad2c-f873360adfae"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "", "key_name": null, "OS-SRV-USG:launched_at": null, "OS-SRV-USG:terminated_at": null, "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000005", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": "spawning", "OS-EXT-STS:vm_state": "building", "OS-EXT-STS:power_state": 0, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 26 16:48:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:32.707 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/ad24fa25-1660-453a-ad2c-f873360adfae used request id req-fdacfff2-3701-4d45-be3e-8b2effbec3e9 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 26 16:48:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:32.709 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'ad24fa25-1660-453a-ad2c-f873360adfae', 'name': 'vn-vo2qfhx-tzlt2x4t3ov5-sakwwya3bplf-vnf-jp2oxis2stdo', 'flavor': {'id': 'c2a8df4d-a1d7-42a3-8279-8c7de8a1a662', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '718285d9-0264-40f4-9fb3-d2faff180284'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000005', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'paused', 'tenant_id': 'aa8f1f3bbce34237a208c8e92ca9286f', 'user_id': '3c0ab9326d69400aa6a4a91432885d7f', 'hostId': '5b2ad2004cb5a5985538ff82d4dda707a9aa9c0c35c745039e18e89b', 'status': 'paused', 'metadata': {'metering.server_group': '06b33269-d1c6-4fb9-a44b-be304982a550'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 26 16:48:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:32.712 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'a2578f61-3f19-40f4-a32f-97cf22569550', 'name': 'vn-vo2qfhx-3o3bmw4mfsy3-eo23jljqgyv6-vnf-pi7veetjym6y', 'flavor': {'id': 'c2a8df4d-a1d7-42a3-8279-8c7de8a1a662', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '718285d9-0264-40f4-9fb3-d2faff180284'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'aa8f1f3bbce34237a208c8e92ca9286f', 'user_id': '3c0ab9326d69400aa6a4a91432885d7f', 'hostId': '5b2ad2004cb5a5985538ff82d4dda707a9aa9c0c35c745039e18e89b', 'status': 'active', 'metadata': {'metering.server_group': '06b33269-d1c6-4fb9-a44b-be304982a550'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 26 16:48:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:32.712 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 26 16:48:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:32.712 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:48:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:32.712 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:48:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:32.713 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:48:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:32.714 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-26T16:48:32.713051) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:48:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:32.785 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:32.786 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:32.786 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:32.857 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:32.858 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:32.858 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:32.935 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:32.936 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:32.936 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:32.937 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 26 16:48:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:32.937 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f04ce8af080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:48:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:32.937 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 26 16:48:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:32.937 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:48:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:32.937 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:48:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:32.938 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:48:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:32.938 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.latency volume: 876270399 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:32.938 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.latency volume: 10769042 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:32.938 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-26T16:48:32.938007) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:48:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:32.938 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:32.939 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:32.939 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:32.939 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:32.939 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.latency volume: 1221465504 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:32.940 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.latency volume: 9811607 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:32.940 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:32.940 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 26 16:48:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:32.940 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f04ce8af0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:48:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:32.940 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 26 16:48:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:32.941 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:48:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:32.941 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:48:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:32.941 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:48:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:32.941 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-26T16:48:32.941129) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:48:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:32.941 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:32.942 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:32.942 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:32.942 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:32.942 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:32.943 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:32.943 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:32.943 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:32.943 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:32.944 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 26 16:48:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:32.944 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f04ce8af470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:48:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:32.944 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 26 16:48:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:32.944 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:48:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:32.944 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:48:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:32.944 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:48:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:32.945 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-26T16:48:32.944678) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:48:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:32.949 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:32.954 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for ad24fa25-1660-453a-ad2c-f873360adfae / tap97c831fb-1c inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Jan 26 16:48:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:32.954 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:32.965 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:32.966 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 26 16:48:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:32.966 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f04ce8ac260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:48:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:32.966 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Jan 26 16:48:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:32.966 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:48:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:32.967 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:48:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:32.967 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:48:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:32.967 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 26 16:48:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:32.967 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-vo2qfhx-tzlt2x4t3ov5-sakwwya3bplf-vnf-jp2oxis2stdo>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-vo2qfhx-tzlt2x4t3ov5-sakwwya3bplf-vnf-jp2oxis2stdo>]
Jan 26 16:48:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:32.967 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f04cfa81c70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:48:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:32.968 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 26 16:48:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:32.968 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:48:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:32.968 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:48:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:32.968 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:48:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:32.968 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2026-01-26T16:48:32.967101) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:48:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:32.968 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-26T16:48:32.968307) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:48:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:32.991 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/cpu volume: 41760000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.016 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/cpu volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.037 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/cpu volume: 36020000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.038 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.038 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f04ce8ac170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.038 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.038 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.038 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.039 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.039 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.039 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.039 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-26T16:48:33.039037) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.040 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.040 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.040 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f04ce8af140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.040 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.040 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.041 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.041 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.041 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.042 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-26T16:48:33.041108) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.042 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f04ce8adb50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.042 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.042 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.042 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.042 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.042 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.043 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-26T16:48:33.042709) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.043 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.043 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.044 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.044 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f04ce8ad9a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.044 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.044 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.044 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.044 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.044 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.044 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.045 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.outgoing.bytes volume: 2398 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.045 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.045 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f04ce8af1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.045 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.046 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-26T16:48:33.044560) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.046 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.046 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.046 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.046 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.047 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-26T16:48:33.046399) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.047 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f04ce8ad8e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.047 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.047 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.047 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.047 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.048 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-26T16:48:33.047742) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.048 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.048 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.048 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.049 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.049 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f04ce8ada60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.049 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.049 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.049 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.049 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.049 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.049 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.050 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.outgoing.bytes.delta volume: 140 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.050 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-26T16:48:33.049489) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.050 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.050 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f04ce8adaf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.051 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.051 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8adb20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.051 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8adb20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.051 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.051 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2026-01-26T16:48:33.051297) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.051 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.052 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-vo2qfhx-tzlt2x4t3ov5-sakwwya3bplf-vnf-jp2oxis2stdo>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-vo2qfhx-tzlt2x4t3ov5-sakwwya3bplf-vnf-jp2oxis2stdo>]
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.052 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f04ce8ad880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.052 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.052 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.052 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.052 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.053 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.053 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-26T16:48:33.052807) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.053 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.053 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.054 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.054 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f04ce8af3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.054 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.054 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.054 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.054 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.054 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/memory.usage volume: 48.79296875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.054 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-26T16:48:33.054622) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.055 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.055 14 WARNING ceilometer.compute.pollsters [-] memory.usage statistic in not available for instance ad24fa25-1660-453a-ad2c-f873360adfae: ceilometer.compute.pollsters.NoVolumeException
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.055 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/memory.usage volume: 49.07421875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.055 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.055 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f04ce8af6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.056 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.056 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.056 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.056 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.056 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.056 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.057 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.057 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.057 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f04ce8af410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.057 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.057 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-26T16:48:33.056273) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.058 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.058 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.058 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.058 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.bytes volume: 2304 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.058 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-26T16:48:33.058368) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.058 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.058 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.incoming.bytes volume: 1696 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.059 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.059 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f04ce8ac470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.059 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.059 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.059 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.059 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.059 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.060 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.060 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-26T16:48:33.059732) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.060 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.060 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.060 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f04d17142f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.061 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.061 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.061 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.061 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.061 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-26T16:48:33.061268) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.085 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.087 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.087 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.115 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.115 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.115 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.140 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.141 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.141 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.142 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.142 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f04ce8afe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.142 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.142 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.142 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.142 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.142 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.143 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.143 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.143 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-26T16:48:33.142592) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.143 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.143 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.144 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.144 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.144 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.144 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.145 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.145 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f04ce8aede0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.145 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.145 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.145 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.145 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.145 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.145 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-26T16:48:33.145619) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.146 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.146 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.146 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/disk.device.read.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.146 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/disk.device.read.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.147 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/disk.device.read.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.147 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.147 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.147 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.148 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.148 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f04ce8aef00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.148 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.148 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.148 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.148 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.148 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.latency volume: 331117122 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.149 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-26T16:48:33.148652) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.149 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.latency volume: 97972603 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.149 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.latency volume: 58692222 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.149 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/disk.device.read.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.149 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/disk.device.read.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.150 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/disk.device.read.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.150 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.latency volume: 437272566 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.150 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.latency volume: 86953754 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.150 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.latency volume: 62824695 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.151 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.151 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f04ce8ac710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.151 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.151 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.151 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.151 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.152 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.152 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-26T16:48:33.151758) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.152 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/power.state volume: 3 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.152 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.152 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.153 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f04ce8aef60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.153 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.153 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.153 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.153 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.153 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.153 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.154 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.154 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/disk.device.read.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.154 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/disk.device.read.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.154 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/disk.device.read.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.154 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.155 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-26T16:48:33.153358) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.155 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.155 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.155 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.156 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f04ce8aefc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.156 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.156 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.156 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.156 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.156 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.156 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.157 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.157 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-26T16:48:33.156440) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.157 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.157 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.157 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.158 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.158 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.158 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.158 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.159 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.159 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.159 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.159 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.159 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.159 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.159 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.159 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.159 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.159 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.160 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.160 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.160 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.160 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.160 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.160 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.160 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.160 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.160 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.160 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.160 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.160 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.160 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.160 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.161 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:48:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:48:33.161 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:48:34 compute-0 nova_compute[185389]: 2026-01-26 16:48:34.702 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:48:36 compute-0 nova_compute[185389]: 2026-01-26 16:48:36.040 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:48:38 compute-0 podman[243210]: 2026-01-26 16:48:38.23357463 +0000 UTC m=+0.112018525 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 26 16:48:39 compute-0 nova_compute[185389]: 2026-01-26 16:48:39.703 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:48:40 compute-0 podman[243234]: 2026-01-26 16:48:40.182291517 +0000 UTC m=+0.068694831 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 26 16:48:40 compute-0 nova_compute[185389]: 2026-01-26 16:48:40.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:48:40 compute-0 nova_compute[185389]: 2026-01-26 16:48:40.720 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:48:41 compute-0 nova_compute[185389]: 2026-01-26 16:48:41.044 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:48:41 compute-0 nova_compute[185389]: 2026-01-26 16:48:41.721 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:48:42 compute-0 nova_compute[185389]: 2026-01-26 16:48:42.721 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:48:43 compute-0 podman[243252]: 2026-01-26 16:48:43.232542259 +0000 UTC m=+0.108943031 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 26 16:48:43 compute-0 podman[243251]: 2026-01-26 16:48:43.236352973 +0000 UTC m=+0.121816701 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, 
managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 26 16:48:43 compute-0 podman[243253]: 2026-01-26 16:48:43.256756335 +0000 UTC m=+0.132980093 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, io.openshift.expose-services=, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, config_id=kepler, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Jan 26 16:48:44 compute-0 nova_compute[185389]: 2026-01-26 16:48:44.705 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:48:44 compute-0 nova_compute[185389]: 2026-01-26 16:48:44.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:48:44 compute-0 nova_compute[185389]: 2026-01-26 16:48:44.721 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:48:44 compute-0 nova_compute[185389]: 2026-01-26 16:48:44.721 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 16:48:44 compute-0 nova_compute[185389]: 2026-01-26 16:48:44.750 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: ad24fa25-1660-453a-ad2c-f873360adfae] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 26 16:48:45 compute-0 nova_compute[185389]: 2026-01-26 16:48:45.930 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "refresh_cache-60ba224f-9c5d-4eb4-b501-66d7339832b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:48:45 compute-0 nova_compute[185389]: 2026-01-26 16:48:45.930 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquired lock "refresh_cache-60ba224f-9c5d-4eb4-b501-66d7339832b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:48:45 compute-0 nova_compute[185389]: 2026-01-26 16:48:45.931 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 26 16:48:45 compute-0 nova_compute[185389]: 2026-01-26 16:48:45.931 185393 DEBUG nova.objects.instance [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 60ba224f-9c5d-4eb4-b501-66d7339832b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:48:46 compute-0 nova_compute[185389]: 2026-01-26 16:48:46.049 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:48:49 compute-0 nova_compute[185389]: 2026-01-26 16:48:49.709 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:48:49 compute-0 nova_compute[185389]: 2026-01-26 16:48:49.960 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Updating instance_info_cache with network_info: [{"id": "0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f", "address": "fa:16:3e:b0:51:31", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.57", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f88f3ae-fb", "ovs_interfaceid": "0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:48:49 compute-0 nova_compute[185389]: 2026-01-26 16:48:49.980 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Releasing lock "refresh_cache-60ba224f-9c5d-4eb4-b501-66d7339832b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:48:49 compute-0 nova_compute[185389]: 2026-01-26 16:48:49.981 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 26 16:48:49 compute-0 nova_compute[185389]: 2026-01-26 16:48:49.982 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:48:49 compute-0 nova_compute[185389]: 2026-01-26 16:48:49.983 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:48:49 compute-0 nova_compute[185389]: 2026-01-26 16:48:49.983 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:48:50 compute-0 nova_compute[185389]: 2026-01-26 16:48:50.013 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:48:50 compute-0 nova_compute[185389]: 2026-01-26 16:48:50.014 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:48:50 compute-0 nova_compute[185389]: 2026-01-26 16:48:50.015 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:48:50 compute-0 nova_compute[185389]: 2026-01-26 16:48:50.015 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:48:50 compute-0 nova_compute[185389]: 2026-01-26 16:48:50.117 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:48:50 compute-0 nova_compute[185389]: 2026-01-26 16:48:50.199 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:48:50 compute-0 nova_compute[185389]: 2026-01-26 16:48:50.200 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:48:50 compute-0 nova_compute[185389]: 2026-01-26 16:48:50.264 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:48:50 compute-0 nova_compute[185389]: 2026-01-26 16:48:50.265 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:48:50 compute-0 nova_compute[185389]: 2026-01-26 16:48:50.327 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:48:50 compute-0 nova_compute[185389]: 2026-01-26 16:48:50.328 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:48:50 compute-0 nova_compute[185389]: 2026-01-26 16:48:50.389 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:48:50 compute-0 nova_compute[185389]: 2026-01-26 16:48:50.399 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ad24fa25-1660-453a-ad2c-f873360adfae/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:48:50 compute-0 nova_compute[185389]: 2026-01-26 16:48:50.466 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ad24fa25-1660-453a-ad2c-f873360adfae/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:48:50 compute-0 nova_compute[185389]: 2026-01-26 16:48:50.468 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ad24fa25-1660-453a-ad2c-f873360adfae/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:48:50 compute-0 nova_compute[185389]: 2026-01-26 16:48:50.530 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ad24fa25-1660-453a-ad2c-f873360adfae/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:48:50 compute-0 nova_compute[185389]: 2026-01-26 16:48:50.532 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ad24fa25-1660-453a-ad2c-f873360adfae/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:48:50 compute-0 nova_compute[185389]: 2026-01-26 16:48:50.602 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ad24fa25-1660-453a-ad2c-f873360adfae/disk.eph0 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:48:50 compute-0 nova_compute[185389]: 2026-01-26 16:48:50.603 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ad24fa25-1660-453a-ad2c-f873360adfae/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:48:50 compute-0 nova_compute[185389]: 2026-01-26 16:48:50.672 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ad24fa25-1660-453a-ad2c-f873360adfae/disk.eph0 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:48:50 compute-0 nova_compute[185389]: 2026-01-26 16:48:50.681 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:48:50 compute-0 nova_compute[185389]: 2026-01-26 16:48:50.744 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:48:50 compute-0 nova_compute[185389]: 2026-01-26 16:48:50.745 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:48:50 compute-0 nova_compute[185389]: 2026-01-26 16:48:50.811 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:48:50 compute-0 nova_compute[185389]: 2026-01-26 16:48:50.813 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:48:50 compute-0 nova_compute[185389]: 2026-01-26 16:48:50.877 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:48:50 compute-0 nova_compute[185389]: 2026-01-26 16:48:50.879 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:48:50 compute-0 nova_compute[185389]: 2026-01-26 16:48:50.945 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:48:51 compute-0 nova_compute[185389]: 2026-01-26 16:48:51.052 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:48:51 compute-0 podman[243350]: 2026-01-26 16:48:51.206851841 +0000 UTC m=+0.098014387 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 26 16:48:51 compute-0 nova_compute[185389]: 2026-01-26 16:48:51.342 185393 WARNING nova.virt.libvirt.driver [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:48:51 compute-0 nova_compute[185389]: 2026-01-26 16:48:51.343 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4898MB free_disk=72.39915084838867GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:48:51 compute-0 nova_compute[185389]: 2026-01-26 16:48:51.344 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:48:51 compute-0 nova_compute[185389]: 2026-01-26 16:48:51.344 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:48:51 compute-0 nova_compute[185389]: 2026-01-26 16:48:51.642 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance 60ba224f-9c5d-4eb4-b501-66d7339832b9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:48:51 compute-0 nova_compute[185389]: 2026-01-26 16:48:51.642 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance a2578f61-3f19-40f4-a32f-97cf22569550 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:48:51 compute-0 nova_compute[185389]: 2026-01-26 16:48:51.643 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance ad24fa25-1660-453a-ad2c-f873360adfae actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:48:51 compute-0 nova_compute[185389]: 2026-01-26 16:48:51.643 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:48:51 compute-0 nova_compute[185389]: 2026-01-26 16:48:51.643 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:48:51 compute-0 nova_compute[185389]: 2026-01-26 16:48:51.721 185393 DEBUG nova.compute.provider_tree [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:48:51 compute-0 nova_compute[185389]: 2026-01-26 16:48:51.742 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:48:51 compute-0 nova_compute[185389]: 2026-01-26 16:48:51.743 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:48:51 compute-0 nova_compute[185389]: 2026-01-26 16:48:51.744 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.400s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:48:52 compute-0 nova_compute[185389]: 2026-01-26 16:48:52.481 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:48:52 compute-0 nova_compute[185389]: 2026-01-26 16:48:52.481 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:48:54 compute-0 nova_compute[185389]: 2026-01-26 16:48:54.712 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:48:56 compute-0 nova_compute[185389]: 2026-01-26 16:48:56.056 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:48:59 compute-0 nova_compute[185389]: 2026-01-26 16:48:59.715 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:48:59 compute-0 podman[201244]: time="2026-01-26T16:48:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 16:48:59 compute-0 podman[201244]: @ - - [26/Jan/2026:16:48:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 16:48:59 compute-0 podman[201244]: @ - - [26/Jan/2026:16:48:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4379 "" "Go-http-client/1.1"
Jan 26 16:49:00 compute-0 podman[243374]: 2026-01-26 16:49:00.315130484 +0000 UTC m=+0.174946691 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, container_name=openstack_network_exporter, distribution-scope=public, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, release=1755695350, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, name=ubi9-minimal, vcs-type=git)
Jan 26 16:49:01 compute-0 nova_compute[185389]: 2026-01-26 16:49:01.060 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:49:01 compute-0 openstack_network_exporter[204387]: ERROR   16:49:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 16:49:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:49:01 compute-0 openstack_network_exporter[204387]: ERROR   16:49:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 16:49:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:49:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:49:01.728 106955 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:49:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:49:01.737 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.009s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:49:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:49:01.739 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:49:03 compute-0 podman[243395]: 2026-01-26 16:49:03.254586071 +0000 UTC m=+0.127000256 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260120)
Jan 26 16:49:04 compute-0 nova_compute[185389]: 2026-01-26 16:49:04.719 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:49:06 compute-0 nova_compute[185389]: 2026-01-26 16:49:06.064 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:49:09 compute-0 podman[243415]: 2026-01-26 16:49:09.231261895 +0000 UTC m=+0.109397987 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 26 16:49:09 compute-0 nova_compute[185389]: 2026-01-26 16:49:09.721 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:49:11 compute-0 nova_compute[185389]: 2026-01-26 16:49:11.067 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:49:11 compute-0 podman[243439]: 2026-01-26 16:49:11.205931142 +0000 UTC m=+0.095399097 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 26 16:49:14 compute-0 podman[243460]: 2026-01-26 16:49:14.200465397 +0000 UTC m=+0.076598016 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vendor=Red Hat, Inc., config_id=kepler, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=base rhel9, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, maintainer=Red Hat, Inc., release=1214.1726694543, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Jan 26 16:49:14 compute-0 podman[243459]: 2026-01-26 16:49:14.21565725 +0000 UTC m=+0.096151947 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:49:14 compute-0 podman[243458]: 2026-01-26 16:49:14.275484387 +0000 UTC m=+0.160081346 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 26 16:49:14 compute-0 nova_compute[185389]: 2026-01-26 16:49:14.723 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:49:16 compute-0 nova_compute[185389]: 2026-01-26 16:49:16.071 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:49:19 compute-0 nova_compute[185389]: 2026-01-26 16:49:19.726 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:49:21 compute-0 nova_compute[185389]: 2026-01-26 16:49:21.075 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:49:22 compute-0 podman[243523]: 2026-01-26 16:49:22.183850782 +0000 UTC m=+0.074367033 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 16:49:24 compute-0 nova_compute[185389]: 2026-01-26 16:49:24.728 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:49:26 compute-0 nova_compute[185389]: 2026-01-26 16:49:26.079 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:49:29 compute-0 nova_compute[185389]: 2026-01-26 16:49:29.729 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:49:29 compute-0 podman[201244]: time="2026-01-26T16:49:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 16:49:29 compute-0 podman[201244]: @ - - [26/Jan/2026:16:49:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 16:49:29 compute-0 podman[201244]: @ - - [26/Jan/2026:16:49:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4379 "" "Go-http-client/1.1"
Jan 26 16:49:31 compute-0 nova_compute[185389]: 2026-01-26 16:49:31.083 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:49:31 compute-0 podman[243546]: 2026-01-26 16:49:31.190510527 +0000 UTC m=+0.082619960 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=openstack_network_exporter, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, release=1755695350, version=9.6, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, name=ubi9-minimal, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container)
Jan 26 16:49:31 compute-0 openstack_network_exporter[204387]: ERROR   16:49:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 16:49:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:49:31 compute-0 openstack_network_exporter[204387]: ERROR   16:49:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 16:49:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:49:34 compute-0 podman[243566]: 2026-01-26 16:49:34.187427126 +0000 UTC m=+0.073284285 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.build-date=20260120)
Jan 26 16:49:34 compute-0 nova_compute[185389]: 2026-01-26 16:49:34.733 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:49:36 compute-0 nova_compute[185389]: 2026-01-26 16:49:36.089 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:49:39 compute-0 nova_compute[185389]: 2026-01-26 16:49:39.737 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:49:40 compute-0 podman[243584]: 2026-01-26 16:49:40.228036672 +0000 UTC m=+0.109200192 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 26 16:49:40 compute-0 nova_compute[185389]: 2026-01-26 16:49:40.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:49:40 compute-0 nova_compute[185389]: 2026-01-26 16:49:40.719 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:49:41 compute-0 nova_compute[185389]: 2026-01-26 16:49:41.091 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:49:42 compute-0 podman[243607]: 2026-01-26 16:49:42.20053797 +0000 UTC m=+0.089737462 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, config_id=ovn_metadata_agent)
Jan 26 16:49:42 compute-0 nova_compute[185389]: 2026-01-26 16:49:42.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:49:44 compute-0 nova_compute[185389]: 2026-01-26 16:49:44.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:49:44 compute-0 nova_compute[185389]: 2026-01-26 16:49:44.740 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:49:44 compute-0 podman[243627]: 2026-01-26 16:49:44.819874947 +0000 UTC m=+0.121220000 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 26 16:49:44 compute-0 podman[243628]: 2026-01-26 16:49:44.826079795 +0000 UTC m=+0.122547285 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, maintainer=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9, io.buildah.version=1.29.0, config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, release=1214.1726694543, build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9)
Jan 26 16:49:44 compute-0 podman[243626]: 2026-01-26 16:49:44.864754688 +0000 UTC m=+0.170117250 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 26 16:49:44 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 26 16:49:46 compute-0 nova_compute[185389]: 2026-01-26 16:49:46.094 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:49:46 compute-0 nova_compute[185389]: 2026-01-26 16:49:46.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:49:46 compute-0 nova_compute[185389]: 2026-01-26 16:49:46.720 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:49:48 compute-0 nova_compute[185389]: 2026-01-26 16:49:48.010 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "refresh_cache-a2578f61-3f19-40f4-a32f-97cf22569550" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:49:48 compute-0 nova_compute[185389]: 2026-01-26 16:49:48.011 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquired lock "refresh_cache-a2578f61-3f19-40f4-a32f-97cf22569550" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:49:48 compute-0 nova_compute[185389]: 2026-01-26 16:49:48.011 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 26 16:49:49 compute-0 nova_compute[185389]: 2026-01-26 16:49:49.743 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:49:50 compute-0 nova_compute[185389]: 2026-01-26 16:49:50.162 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Updating instance_info_cache with network_info: [{"id": "58a644b5-e3a2-4838-9216-8540447cf0a5", "address": "fa:16:3e:ac:8d:a5", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.107", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58a644b5-e3", "ovs_interfaceid": "58a644b5-e3a2-4838-9216-8540447cf0a5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:49:50 compute-0 nova_compute[185389]: 2026-01-26 16:49:50.181 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Releasing lock "refresh_cache-a2578f61-3f19-40f4-a32f-97cf22569550" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:49:50 compute-0 nova_compute[185389]: 2026-01-26 16:49:50.183 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 26 16:49:50 compute-0 nova_compute[185389]: 2026-01-26 16:49:50.184 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:49:50 compute-0 nova_compute[185389]: 2026-01-26 16:49:50.185 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:49:50 compute-0 nova_compute[185389]: 2026-01-26 16:49:50.187 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:49:50 compute-0 nova_compute[185389]: 2026-01-26 16:49:50.232 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:49:50 compute-0 nova_compute[185389]: 2026-01-26 16:49:50.233 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:49:50 compute-0 nova_compute[185389]: 2026-01-26 16:49:50.234 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:49:50 compute-0 nova_compute[185389]: 2026-01-26 16:49:50.234 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:49:50 compute-0 nova_compute[185389]: 2026-01-26 16:49:50.350 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:49:50 compute-0 nova_compute[185389]: 2026-01-26 16:49:50.415 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:49:50 compute-0 nova_compute[185389]: 2026-01-26 16:49:50.416 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:49:50 compute-0 nova_compute[185389]: 2026-01-26 16:49:50.478 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:49:50 compute-0 nova_compute[185389]: 2026-01-26 16:49:50.479 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:49:50 compute-0 nova_compute[185389]: 2026-01-26 16:49:50.549 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:49:50 compute-0 nova_compute[185389]: 2026-01-26 16:49:50.551 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:49:50 compute-0 nova_compute[185389]: 2026-01-26 16:49:50.645 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:49:50 compute-0 nova_compute[185389]: 2026-01-26 16:49:50.653 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ad24fa25-1660-453a-ad2c-f873360adfae/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:49:50 compute-0 nova_compute[185389]: 2026-01-26 16:49:50.724 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ad24fa25-1660-453a-ad2c-f873360adfae/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:49:50 compute-0 nova_compute[185389]: 2026-01-26 16:49:50.725 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ad24fa25-1660-453a-ad2c-f873360adfae/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:49:50 compute-0 nova_compute[185389]: 2026-01-26 16:49:50.791 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ad24fa25-1660-453a-ad2c-f873360adfae/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:49:50 compute-0 nova_compute[185389]: 2026-01-26 16:49:50.792 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ad24fa25-1660-453a-ad2c-f873360adfae/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:49:50 compute-0 nova_compute[185389]: 2026-01-26 16:49:50.871 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ad24fa25-1660-453a-ad2c-f873360adfae/disk.eph0 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:49:50 compute-0 nova_compute[185389]: 2026-01-26 16:49:50.873 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ad24fa25-1660-453a-ad2c-f873360adfae/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:49:50 compute-0 nova_compute[185389]: 2026-01-26 16:49:50.973 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ad24fa25-1660-453a-ad2c-f873360adfae/disk.eph0 --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:49:50 compute-0 nova_compute[185389]: 2026-01-26 16:49:50.980 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:49:51 compute-0 nova_compute[185389]: 2026-01-26 16:49:51.065 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:49:51 compute-0 nova_compute[185389]: 2026-01-26 16:49:51.067 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:49:51 compute-0 nova_compute[185389]: 2026-01-26 16:49:51.098 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:49:51 compute-0 nova_compute[185389]: 2026-01-26 16:49:51.139 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:49:51 compute-0 nova_compute[185389]: 2026-01-26 16:49:51.140 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:49:51 compute-0 nova_compute[185389]: 2026-01-26 16:49:51.212 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:49:51 compute-0 nova_compute[185389]: 2026-01-26 16:49:51.214 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:49:51 compute-0 nova_compute[185389]: 2026-01-26 16:49:51.296 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:49:51 compute-0 nova_compute[185389]: 2026-01-26 16:49:51.779 185393 WARNING nova.virt.libvirt.driver [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:49:51 compute-0 nova_compute[185389]: 2026-01-26 16:49:51.780 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4901MB free_disk=72.39920806884766GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:49:51 compute-0 nova_compute[185389]: 2026-01-26 16:49:51.781 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:49:51 compute-0 nova_compute[185389]: 2026-01-26 16:49:51.781 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:49:52 compute-0 nova_compute[185389]: 2026-01-26 16:49:52.071 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance 60ba224f-9c5d-4eb4-b501-66d7339832b9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:49:52 compute-0 nova_compute[185389]: 2026-01-26 16:49:52.071 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance a2578f61-3f19-40f4-a32f-97cf22569550 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:49:52 compute-0 nova_compute[185389]: 2026-01-26 16:49:52.071 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance ad24fa25-1660-453a-ad2c-f873360adfae actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:49:52 compute-0 nova_compute[185389]: 2026-01-26 16:49:52.072 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:49:52 compute-0 nova_compute[185389]: 2026-01-26 16:49:52.072 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:49:52 compute-0 nova_compute[185389]: 2026-01-26 16:49:52.157 185393 DEBUG nova.compute.provider_tree [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:49:52 compute-0 nova_compute[185389]: 2026-01-26 16:49:52.189 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:49:52 compute-0 nova_compute[185389]: 2026-01-26 16:49:52.191 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:49:52 compute-0 nova_compute[185389]: 2026-01-26 16:49:52.192 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.411s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:49:53 compute-0 nova_compute[185389]: 2026-01-26 16:49:53.187 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:49:53 compute-0 nova_compute[185389]: 2026-01-26 16:49:53.188 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:49:53 compute-0 podman[243730]: 2026-01-26 16:49:53.211221498 +0000 UTC m=+0.090906694 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 26 16:49:53 compute-0 nova_compute[185389]: 2026-01-26 16:49:53.213 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:49:54 compute-0 nova_compute[185389]: 2026-01-26 16:49:54.746 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:49:56 compute-0 nova_compute[185389]: 2026-01-26 16:49:56.101 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:49:59 compute-0 podman[201244]: time="2026-01-26T16:49:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 16:49:59 compute-0 nova_compute[185389]: 2026-01-26 16:49:59.749 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:49:59 compute-0 podman[201244]: @ - - [26/Jan/2026:16:49:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 16:49:59 compute-0 podman[201244]: @ - - [26/Jan/2026:16:49:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4375 "" "Go-http-client/1.1"
Jan 26 16:50:01 compute-0 nova_compute[185389]: 2026-01-26 16:50:01.103 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:50:01 compute-0 openstack_network_exporter[204387]: ERROR   16:50:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 16:50:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:50:01 compute-0 openstack_network_exporter[204387]: ERROR   16:50:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 16:50:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:50:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:50:01.729 106955 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:50:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:50:01.730 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:50:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:50:01.731 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:50:02 compute-0 podman[243754]: 2026-01-26 16:50:02.217041111 +0000 UTC m=+0.101293527 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, vcs-type=git, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, release=1755695350, managed_by=edpm_ansible, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': 
['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, distribution-scope=public, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, config_id=openstack_network_exporter, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 26 16:50:04 compute-0 nova_compute[185389]: 2026-01-26 16:50:04.751 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:50:05 compute-0 podman[243773]: 2026-01-26 16:50:05.19624489 +0000 UTC m=+0.078894077 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260120, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 26 16:50:06 compute-0 nova_compute[185389]: 2026-01-26 16:50:06.106 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:50:09 compute-0 nova_compute[185389]: 2026-01-26 16:50:09.754 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:50:11 compute-0 nova_compute[185389]: 2026-01-26 16:50:11.110 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:50:11 compute-0 podman[243793]: 2026-01-26 16:50:11.210160654 +0000 UTC m=+0.076816491 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 26 16:50:13 compute-0 podman[243817]: 2026-01-26 16:50:13.196702095 +0000 UTC m=+0.078995470 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 26 16:50:14 compute-0 nova_compute[185389]: 2026-01-26 16:50:14.756 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:50:15 compute-0 podman[243836]: 2026-01-26 16:50:15.210435704 +0000 UTC m=+0.094230154 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 16:50:15 compute-0 podman[243837]: 2026-01-26 16:50:15.242444715 +0000 UTC m=+0.118765653 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, vendor=Red Hat, Inc., version=9.4, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=, build-date=2024-09-18T21:23:30, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1214.1726694543, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible)
Jan 26 16:50:15 compute-0 podman[243835]: 2026-01-26 16:50:15.282881695 +0000 UTC m=+0.169893204 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:50:16 compute-0 nova_compute[185389]: 2026-01-26 16:50:16.113 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:50:19 compute-0 nova_compute[185389]: 2026-01-26 16:50:19.761 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:50:21 compute-0 nova_compute[185389]: 2026-01-26 16:50:21.117 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:50:24 compute-0 podman[243899]: 2026-01-26 16:50:24.178350626 +0000 UTC m=+0.064419794 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 16:50:24 compute-0 nova_compute[185389]: 2026-01-26 16:50:24.763 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:50:26 compute-0 nova_compute[185389]: 2026-01-26 16:50:26.121 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:50:29 compute-0 podman[201244]: time="2026-01-26T16:50:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 16:50:29 compute-0 podman[201244]: @ - - [26/Jan/2026:16:50:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 16:50:29 compute-0 nova_compute[185389]: 2026-01-26 16:50:29.764 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:50:29 compute-0 podman[201244]: @ - - [26/Jan/2026:16:50:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4382 "" "Go-http-client/1.1"
Jan 26 16:50:31 compute-0 nova_compute[185389]: 2026-01-26 16:50:31.125 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.342 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.342 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.342 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.343 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f04ce8af020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.344 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.344 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.344 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.344 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.344 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.344 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.344 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.345 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.345 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.345 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.345 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.345 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.345 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8adb20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.345 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.345 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.346 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.346 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.346 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.346 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.346 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.346 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.350 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '60ba224f-9c5d-4eb4-b501-66d7339832b9', 'name': 'test_0', 'flavor': {'id': 'c2a8df4d-a1d7-42a3-8279-8c7de8a1a662', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '718285d9-0264-40f4-9fb3-d2faff180284'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'aa8f1f3bbce34237a208c8e92ca9286f', 'user_id': '3c0ab9326d69400aa6a4a91432885d7f', 'hostId': '5b2ad2004cb5a5985538ff82d4dda707a9aa9c0c35c745039e18e89b', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.353 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'ad24fa25-1660-453a-ad2c-f873360adfae', 'name': 'vn-vo2qfhx-tzlt2x4t3ov5-sakwwya3bplf-vnf-jp2oxis2stdo', 'flavor': {'id': 'c2a8df4d-a1d7-42a3-8279-8c7de8a1a662', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '718285d9-0264-40f4-9fb3-d2faff180284'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000005', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'paused', 'tenant_id': 'aa8f1f3bbce34237a208c8e92ca9286f', 'user_id': '3c0ab9326d69400aa6a4a91432885d7f', 'hostId': '5b2ad2004cb5a5985538ff82d4dda707a9aa9c0c35c745039e18e89b', 'status': 'paused', 'metadata': {'metering.server_group': '06b33269-d1c6-4fb9-a44b-be304982a550'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.356 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'a2578f61-3f19-40f4-a32f-97cf22569550', 'name': 'vn-vo2qfhx-3o3bmw4mfsy3-eo23jljqgyv6-vnf-pi7veetjym6y', 'flavor': {'id': 'c2a8df4d-a1d7-42a3-8279-8c7de8a1a662', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '718285d9-0264-40f4-9fb3-d2faff180284'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'aa8f1f3bbce34237a208c8e92ca9286f', 'user_id': '3c0ab9326d69400aa6a4a91432885d7f', 'hostId': '5b2ad2004cb5a5985538ff82d4dda707a9aa9c0c35c745039e18e89b', 'status': 'active', 'metadata': {'metering.server_group': '06b33269-d1c6-4fb9-a44b-be304982a550'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.357 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.357 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.357 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.357 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.358 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-26T16:50:31.357576) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:50:31 compute-0 openstack_network_exporter[204387]: ERROR   16:50:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 16:50:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:50:31 compute-0 openstack_network_exporter[204387]: ERROR   16:50:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 16:50:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.443 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.444 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.444 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.525 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.526 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.526 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.604 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.605 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.605 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.606 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.606 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f04ce8af080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.607 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.607 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.607 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.608 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.608 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.latency volume: 876270399 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.609 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.latency volume: 10769042 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.609 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.610 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.610 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.611 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-26T16:50:31.608028) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.611 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.612 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.latency volume: 1221465504 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.612 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.latency volume: 9811607 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.612 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.613 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.613 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f04ce8af0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.614 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.614 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.614 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.614 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.615 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.615 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.616 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.616 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.617 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.617 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.618 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.618 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.619 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.619 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.620 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-26T16:50:31.614341) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.620 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f04ce8af470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.620 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.620 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.621 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.621 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.622 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-26T16:50:31.621474) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.626 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.630 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.634 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.635 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.635 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f04ce8ac260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.635 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.635 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f04cfa81c70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.635 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.635 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.635 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.636 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.636 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-26T16:50:31.635936) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.662 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/cpu volume: 43030000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.685 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/cpu volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.714 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/cpu volume: 37330000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.714 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.715 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f04ce8ac170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.715 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.715 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.715 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.715 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.715 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.715 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.715 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-26T16:50:31.715323) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.716 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.716 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.716 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f04ce8af140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.716 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.716 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.717 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.717 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.717 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.717 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f04ce8adb50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.718 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.718 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.718 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.718 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.718 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-26T16:50:31.717171) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.718 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.718 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-26T16:50:31.718431) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.718 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.719 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.719 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.719 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f04ce8ad9a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.719 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.719 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.719 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.719 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.720 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.720 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.720 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-26T16:50:31.719818) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.720 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.outgoing.bytes volume: 2398 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.720 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.720 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f04ce8af1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.721 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.721 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.721 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.721 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.721 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.721 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f04ce8ad8e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.722 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.722 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.722 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.722 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.722 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.722 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-26T16:50:31.721314) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.723 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-26T16:50:31.722476) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.723 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.723 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.723 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.723 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f04ce8ada60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.724 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.724 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.724 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.724 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.724 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.724 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-26T16:50:31.724256) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.724 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.725 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.725 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.725 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f04ce8adaf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.725 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.725 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f04ce8ad880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.725 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.725 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.725 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.726 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.726 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-26T16:50:31.725997) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.726 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.726 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.726 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.727 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.727 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f04ce8af3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.727 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.727 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.727 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.727 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.727 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-26T16:50:31.727481) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.727 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/memory.usage volume: 48.79296875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.728 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.728 14 WARNING ceilometer.compute.pollsters [-] memory.usage statistic in not available for instance ad24fa25-1660-453a-ad2c-f873360adfae: ceilometer.compute.pollsters.NoVolumeException
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.728 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/memory.usage volume: 48.953125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.728 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.728 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f04ce8af6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.728 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.728 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.729 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.729 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.729 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-26T16:50:31.729136) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.729 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.729 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.729 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.730 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.730 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f04ce8af410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.730 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.730 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.730 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.730 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.730 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-26T16:50:31.730660) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.731 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.bytes volume: 2304 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.731 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.731 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.incoming.bytes volume: 1696 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.731 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.731 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f04ce8ac470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.732 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.732 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.732 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.732 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.732 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-26T16:50:31.732252) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.732 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.732 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.733 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.733 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.733 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f04d17142f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.733 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.733 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.733 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.733 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.734 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-26T16:50:31.733845) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.755 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.756 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.756 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.785 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.786 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.786 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.810 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.811 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.811 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.811 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.812 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f04ce8afe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.812 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.812 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.812 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.812 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.812 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.813 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.813 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.813 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-26T16:50:31.812626) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.813 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.813 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.814 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.814 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.814 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.814 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.815 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.815 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f04ce8aede0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.815 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.815 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.816 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.816 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.816 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.816 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-26T16:50:31.816152) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.816 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.817 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.817 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/disk.device.read.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.817 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/disk.device.read.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.817 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/disk.device.read.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.818 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.818 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.819 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.819 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.819 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f04ce8aef00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.820 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.820 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.821 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.821 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.821 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-26T16:50:31.821417) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.821 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.latency volume: 331117122 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.822 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.latency volume: 97972603 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.823 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.latency volume: 58692222 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.823 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/disk.device.read.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.823 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/disk.device.read.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.824 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/disk.device.read.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.824 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.latency volume: 437272566 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.824 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.latency volume: 86953754 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.825 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.latency volume: 62824695 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.825 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.825 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f04ce8ac710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.826 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.826 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.826 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.826 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.826 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.826 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/power.state volume: 3 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.827 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.827 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.827 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f04ce8aef60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.828 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.827 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-26T16:50:31.826263) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.828 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.828 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.828 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.828 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.828 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-26T16:50:31.828298) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.828 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.829 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.829 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/disk.device.read.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.829 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/disk.device.read.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.830 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/disk.device.read.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.830 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.830 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.831 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.832 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.832 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f04ce8aefc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.832 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.833 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.833 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.833 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.833 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.833 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.834 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-26T16:50:31.833164) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.834 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.834 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.834 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.835 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.835 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.835 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.836 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.836 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.836 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.836 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.837 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.837 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.837 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.837 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.837 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.837 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.837 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.837 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.837 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.837 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.837 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.838 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.838 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.838 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.838 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.838 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.838 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.838 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.838 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.838 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.838 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.838 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.838 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:50:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:50:31.838 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:50:33 compute-0 podman[243924]: 2026-01-26 16:50:33.184533229 +0000 UTC m=+0.072728479 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, release=1755695350, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, distribution-scope=public, maintainer=Red Hat, Inc., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=openstack_network_exporter, container_name=openstack_network_exporter)
Jan 26 16:50:34 compute-0 nova_compute[185389]: 2026-01-26 16:50:34.767 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:50:36 compute-0 nova_compute[185389]: 2026-01-26 16:50:36.127 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:50:36 compute-0 podman[243944]: 2026-01-26 16:50:36.230558112 +0000 UTC m=+0.119569495 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260120, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Jan 26 16:50:39 compute-0 nova_compute[185389]: 2026-01-26 16:50:39.769 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:50:41 compute-0 nova_compute[185389]: 2026-01-26 16:50:41.130 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:50:41 compute-0 nova_compute[185389]: 2026-01-26 16:50:41.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:50:41 compute-0 nova_compute[185389]: 2026-01-26 16:50:41.719 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:50:42 compute-0 podman[243963]: 2026-01-26 16:50:42.181497308 +0000 UTC m=+0.069770120 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 26 16:50:44 compute-0 podman[243987]: 2026-01-26 16:50:44.20077289 +0000 UTC m=+0.093675680 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 26 16:50:44 compute-0 nova_compute[185389]: 2026-01-26 16:50:44.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:50:44 compute-0 nova_compute[185389]: 2026-01-26 16:50:44.773 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:50:45 compute-0 nova_compute[185389]: 2026-01-26 16:50:45.721 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:50:46 compute-0 nova_compute[185389]: 2026-01-26 16:50:46.134 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:50:46 compute-0 podman[244008]: 2026-01-26 16:50:46.214731067 +0000 UTC m=+0.084282024 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., release=1214.1726694543, version=9.4, managed_by=edpm_ansible, io.openshift.expose-services=, io.openshift.tags=base rhel9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=kepler, container_name=kepler, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc.)
Jan 26 16:50:46 compute-0 podman[244007]: 2026-01-26 16:50:46.218464859 +0000 UTC m=+0.093628668 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251202, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2)
Jan 26 16:50:46 compute-0 podman[244006]: 2026-01-26 16:50:46.259383572 +0000 UTC m=+0.136237608 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 26 16:50:46 compute-0 nova_compute[185389]: 2026-01-26 16:50:46.721 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:50:48 compute-0 nova_compute[185389]: 2026-01-26 16:50:48.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:50:48 compute-0 nova_compute[185389]: 2026-01-26 16:50:48.720 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:50:48 compute-0 nova_compute[185389]: 2026-01-26 16:50:48.720 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 16:50:48 compute-0 nova_compute[185389]: 2026-01-26 16:50:48.759 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: ad24fa25-1660-453a-ad2c-f873360adfae] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 26 16:50:49 compute-0 nova_compute[185389]: 2026-01-26 16:50:49.456 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "refresh_cache-60ba224f-9c5d-4eb4-b501-66d7339832b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:50:49 compute-0 nova_compute[185389]: 2026-01-26 16:50:49.457 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquired lock "refresh_cache-60ba224f-9c5d-4eb4-b501-66d7339832b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:50:49 compute-0 nova_compute[185389]: 2026-01-26 16:50:49.457 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 26 16:50:49 compute-0 nova_compute[185389]: 2026-01-26 16:50:49.458 185393 DEBUG nova.objects.instance [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 60ba224f-9c5d-4eb4-b501-66d7339832b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:50:49 compute-0 nova_compute[185389]: 2026-01-26 16:50:49.775 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:50:50 compute-0 nova_compute[185389]: 2026-01-26 16:50:50.878 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Updating instance_info_cache with network_info: [{"id": "0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f", "address": "fa:16:3e:b0:51:31", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.57", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f88f3ae-fb", "ovs_interfaceid": "0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:50:50 compute-0 nova_compute[185389]: 2026-01-26 16:50:50.901 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Releasing lock "refresh_cache-60ba224f-9c5d-4eb4-b501-66d7339832b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:50:50 compute-0 nova_compute[185389]: 2026-01-26 16:50:50.901 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 26 16:50:50 compute-0 nova_compute[185389]: 2026-01-26 16:50:50.902 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:50:50 compute-0 nova_compute[185389]: 2026-01-26 16:50:50.902 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:50:50 compute-0 nova_compute[185389]: 2026-01-26 16:50:50.928 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:50:50 compute-0 nova_compute[185389]: 2026-01-26 16:50:50.928 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:50:50 compute-0 nova_compute[185389]: 2026-01-26 16:50:50.929 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:50:50 compute-0 nova_compute[185389]: 2026-01-26 16:50:50.929 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:50:51 compute-0 nova_compute[185389]: 2026-01-26 16:50:51.018 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:50:51 compute-0 nova_compute[185389]: 2026-01-26 16:50:51.096 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:50:51 compute-0 nova_compute[185389]: 2026-01-26 16:50:51.097 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:50:51 compute-0 nova_compute[185389]: 2026-01-26 16:50:51.137 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:50:51 compute-0 nova_compute[185389]: 2026-01-26 16:50:51.163 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:50:51 compute-0 nova_compute[185389]: 2026-01-26 16:50:51.164 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:50:51 compute-0 nova_compute[185389]: 2026-01-26 16:50:51.230 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:50:51 compute-0 nova_compute[185389]: 2026-01-26 16:50:51.231 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:50:51 compute-0 nova_compute[185389]: 2026-01-26 16:50:51.300 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:50:51 compute-0 nova_compute[185389]: 2026-01-26 16:50:51.308 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ad24fa25-1660-453a-ad2c-f873360adfae/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:50:51 compute-0 nova_compute[185389]: 2026-01-26 16:50:51.378 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ad24fa25-1660-453a-ad2c-f873360adfae/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:50:51 compute-0 nova_compute[185389]: 2026-01-26 16:50:51.379 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ad24fa25-1660-453a-ad2c-f873360adfae/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:50:51 compute-0 nova_compute[185389]: 2026-01-26 16:50:51.444 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ad24fa25-1660-453a-ad2c-f873360adfae/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:50:51 compute-0 nova_compute[185389]: 2026-01-26 16:50:51.446 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ad24fa25-1660-453a-ad2c-f873360adfae/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:50:51 compute-0 nova_compute[185389]: 2026-01-26 16:50:51.514 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ad24fa25-1660-453a-ad2c-f873360adfae/disk.eph0 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:50:51 compute-0 nova_compute[185389]: 2026-01-26 16:50:51.515 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ad24fa25-1660-453a-ad2c-f873360adfae/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:50:51 compute-0 nova_compute[185389]: 2026-01-26 16:50:51.577 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ad24fa25-1660-453a-ad2c-f873360adfae/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:50:51 compute-0 nova_compute[185389]: 2026-01-26 16:50:51.584 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:50:51 compute-0 nova_compute[185389]: 2026-01-26 16:50:51.655 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:50:51 compute-0 nova_compute[185389]: 2026-01-26 16:50:51.656 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:50:51 compute-0 nova_compute[185389]: 2026-01-26 16:50:51.728 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:50:51 compute-0 nova_compute[185389]: 2026-01-26 16:50:51.730 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:50:51 compute-0 nova_compute[185389]: 2026-01-26 16:50:51.798 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:50:51 compute-0 nova_compute[185389]: 2026-01-26 16:50:51.800 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:50:51 compute-0 nova_compute[185389]: 2026-01-26 16:50:51.863 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:50:52 compute-0 nova_compute[185389]: 2026-01-26 16:50:52.294 185393 WARNING nova.virt.libvirt.driver [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:50:52 compute-0 nova_compute[185389]: 2026-01-26 16:50:52.296 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4889MB free_disk=72.39920806884766GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:50:52 compute-0 nova_compute[185389]: 2026-01-26 16:50:52.296 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:50:52 compute-0 nova_compute[185389]: 2026-01-26 16:50:52.297 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:50:52 compute-0 nova_compute[185389]: 2026-01-26 16:50:52.379 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance 60ba224f-9c5d-4eb4-b501-66d7339832b9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:50:52 compute-0 nova_compute[185389]: 2026-01-26 16:50:52.380 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance a2578f61-3f19-40f4-a32f-97cf22569550 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:50:52 compute-0 nova_compute[185389]: 2026-01-26 16:50:52.380 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance ad24fa25-1660-453a-ad2c-f873360adfae actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:50:52 compute-0 nova_compute[185389]: 2026-01-26 16:50:52.381 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:50:52 compute-0 nova_compute[185389]: 2026-01-26 16:50:52.381 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:50:52 compute-0 nova_compute[185389]: 2026-01-26 16:50:52.492 185393 DEBUG nova.compute.provider_tree [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:50:52 compute-0 nova_compute[185389]: 2026-01-26 16:50:52.507 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:50:52 compute-0 nova_compute[185389]: 2026-01-26 16:50:52.509 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:50:52 compute-0 nova_compute[185389]: 2026-01-26 16:50:52.509 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.212s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:50:53 compute-0 nova_compute[185389]: 2026-01-26 16:50:53.504 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:50:53 compute-0 nova_compute[185389]: 2026-01-26 16:50:53.505 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:50:54 compute-0 nova_compute[185389]: 2026-01-26 16:50:54.778 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:50:55 compute-0 podman[244102]: 2026-01-26 16:50:55.22004461 +0000 UTC m=+0.093801183 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 26 16:50:56 compute-0 nova_compute[185389]: 2026-01-26 16:50:56.141 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:50:59 compute-0 podman[201244]: time="2026-01-26T16:50:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 16:50:59 compute-0 podman[201244]: @ - - [26/Jan/2026:16:50:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 16:50:59 compute-0 podman[201244]: @ - - [26/Jan/2026:16:50:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4387 "" "Go-http-client/1.1"
Jan 26 16:50:59 compute-0 nova_compute[185389]: 2026-01-26 16:50:59.782 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:51:01 compute-0 nova_compute[185389]: 2026-01-26 16:51:01.144 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:51:01 compute-0 openstack_network_exporter[204387]: ERROR   16:51:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 16:51:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:51:01 compute-0 openstack_network_exporter[204387]: ERROR   16:51:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 16:51:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:51:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:51:01.731 106955 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:51:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:51:01.732 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:51:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:51:01.733 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:51:04 compute-0 podman[244124]: 2026-01-26 16:51:04.20153075 +0000 UTC m=+0.090912234 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, config_id=openstack_network_exporter, architecture=x86_64, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', 
'/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6)
Jan 26 16:51:04 compute-0 nova_compute[185389]: 2026-01-26 16:51:04.786 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:51:06 compute-0 nova_compute[185389]: 2026-01-26 16:51:06.149 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:51:07 compute-0 podman[244144]: 2026-01-26 16:51:07.185645392 +0000 UTC m=+0.078347703 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260120, config_id=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, tcib_managed=true, managed_by=edpm_ansible)
Jan 26 16:51:09 compute-0 nova_compute[185389]: 2026-01-26 16:51:09.788 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:51:11 compute-0 nova_compute[185389]: 2026-01-26 16:51:11.151 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:51:13 compute-0 podman[244163]: 2026-01-26 16:51:13.175718253 +0000 UTC m=+0.062429519 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 26 16:51:14 compute-0 podman[244186]: 2026-01-26 16:51:14.733069707 +0000 UTC m=+0.062397519 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 26 16:51:14 compute-0 nova_compute[185389]: 2026-01-26 16:51:14.791 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:51:16 compute-0 nova_compute[185389]: 2026-01-26 16:51:16.153 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:51:17 compute-0 podman[244206]: 2026-01-26 16:51:17.242773654 +0000 UTC m=+0.114836916 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 16:51:17 compute-0 podman[244207]: 2026-01-26 16:51:17.249879967 +0000 UTC m=+0.124244221 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=kepler, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release-0.7.12=, managed_by=edpm_ansible, io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, vcs-type=git, architecture=x86_64, io.openshift.tags=base rhel9, version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30)
Jan 26 16:51:17 compute-0 podman[244205]: 2026-01-26 16:51:17.275928405 +0000 UTC m=+0.155032699 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_controller, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:51:19 compute-0 nova_compute[185389]: 2026-01-26 16:51:19.793 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:51:21 compute-0 nova_compute[185389]: 2026-01-26 16:51:21.156 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:51:24 compute-0 nova_compute[185389]: 2026-01-26 16:51:24.795 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:51:26 compute-0 nova_compute[185389]: 2026-01-26 16:51:26.160 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:51:26 compute-0 podman[244267]: 2026-01-26 16:51:26.20600326 +0000 UTC m=+0.077929252 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 26 16:51:29 compute-0 podman[201244]: time="2026-01-26T16:51:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 16:51:29 compute-0 podman[201244]: @ - - [26/Jan/2026:16:51:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 16:51:29 compute-0 podman[201244]: @ - - [26/Jan/2026:16:51:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4380 "" "Go-http-client/1.1"
Jan 26 16:51:29 compute-0 nova_compute[185389]: 2026-01-26 16:51:29.799 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:51:31 compute-0 nova_compute[185389]: 2026-01-26 16:51:31.164 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:51:31 compute-0 openstack_network_exporter[204387]: ERROR   16:51:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 16:51:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:51:31 compute-0 openstack_network_exporter[204387]: ERROR   16:51:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 16:51:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:51:34 compute-0 nova_compute[185389]: 2026-01-26 16:51:34.800 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:51:35 compute-0 podman[244292]: 2026-01-26 16:51:35.209176533 +0000 UTC m=+0.087719328 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., name=ubi9-minimal, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides 
the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, version=9.6, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, architecture=x86_64)
Jan 26 16:51:36 compute-0 nova_compute[185389]: 2026-01-26 16:51:36.168 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:51:38 compute-0 podman[244312]: 2026-01-26 16:51:38.216353362 +0000 UTC m=+0.101528873 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260120, tcib_managed=true, config_id=ceilometer_agent_compute, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Jan 26 16:51:39 compute-0 nova_compute[185389]: 2026-01-26 16:51:39.803 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:51:41 compute-0 nova_compute[185389]: 2026-01-26 16:51:41.171 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:51:41 compute-0 nova_compute[185389]: 2026-01-26 16:51:41.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:51:41 compute-0 nova_compute[185389]: 2026-01-26 16:51:41.719 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:51:44 compute-0 podman[244332]: 2026-01-26 16:51:44.169353603 +0000 UTC m=+0.062175743 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 26 16:51:44 compute-0 nova_compute[185389]: 2026-01-26 16:51:44.806 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:51:45 compute-0 podman[244354]: 2026-01-26 16:51:45.239750047 +0000 UTC m=+0.108821862 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 26 16:51:45 compute-0 nova_compute[185389]: 2026-01-26 16:51:45.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:51:46 compute-0 nova_compute[185389]: 2026-01-26 16:51:46.175 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:51:46 compute-0 nova_compute[185389]: 2026-01-26 16:51:46.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:51:47 compute-0 nova_compute[185389]: 2026-01-26 16:51:47.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:51:48 compute-0 podman[244374]: 2026-01-26 16:51:48.227726784 +0000 UTC m=+0.097823543 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, build-date=2024-09-18T21:23:30, distribution-scope=public, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, release=1214.1726694543, vendor=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, io.buildah.version=1.29.0, version=9.4, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=)
Jan 26 16:51:48 compute-0 podman[244372]: 2026-01-26 16:51:48.2756977 +0000 UTC m=+0.148132452 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible)
Jan 26 16:51:48 compute-0 podman[244373]: 2026-01-26 16:51:48.27571749 +0000 UTC m=+0.141132031 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi)
Jan 26 16:51:49 compute-0 nova_compute[185389]: 2026-01-26 16:51:49.718 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:51:49 compute-0 nova_compute[185389]: 2026-01-26 16:51:49.719 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:51:49 compute-0 nova_compute[185389]: 2026-01-26 16:51:49.808 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:51:50 compute-0 nova_compute[185389]: 2026-01-26 16:51:50.361 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "refresh_cache-a2578f61-3f19-40f4-a32f-97cf22569550" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:51:50 compute-0 nova_compute[185389]: 2026-01-26 16:51:50.362 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquired lock "refresh_cache-a2578f61-3f19-40f4-a32f-97cf22569550" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:51:50 compute-0 nova_compute[185389]: 2026-01-26 16:51:50.362 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 26 16:51:51 compute-0 nova_compute[185389]: 2026-01-26 16:51:51.178 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:51:51 compute-0 nova_compute[185389]: 2026-01-26 16:51:51.912 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Updating instance_info_cache with network_info: [{"id": "58a644b5-e3a2-4838-9216-8540447cf0a5", "address": "fa:16:3e:ac:8d:a5", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.107", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58a644b5-e3", "ovs_interfaceid": "58a644b5-e3a2-4838-9216-8540447cf0a5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:51:51 compute-0 nova_compute[185389]: 2026-01-26 16:51:51.983 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Releasing lock "refresh_cache-a2578f61-3f19-40f4-a32f-97cf22569550" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:51:51 compute-0 nova_compute[185389]: 2026-01-26 16:51:51.984 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 26 16:51:51 compute-0 nova_compute[185389]: 2026-01-26 16:51:51.984 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:51:51 compute-0 nova_compute[185389]: 2026-01-26 16:51:51.985 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:51:52 compute-0 nova_compute[185389]: 2026-01-26 16:51:52.027 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:51:52 compute-0 nova_compute[185389]: 2026-01-26 16:51:52.028 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:51:52 compute-0 nova_compute[185389]: 2026-01-26 16:51:52.028 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:51:52 compute-0 nova_compute[185389]: 2026-01-26 16:51:52.029 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:51:52 compute-0 nova_compute[185389]: 2026-01-26 16:51:52.741 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:51:52 compute-0 nova_compute[185389]: 2026-01-26 16:51:52.821 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:51:52 compute-0 nova_compute[185389]: 2026-01-26 16:51:52.823 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:51:52 compute-0 nova_compute[185389]: 2026-01-26 16:51:52.906 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:51:52 compute-0 nova_compute[185389]: 2026-01-26 16:51:52.907 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:51:52 compute-0 nova_compute[185389]: 2026-01-26 16:51:52.971 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:51:52 compute-0 nova_compute[185389]: 2026-01-26 16:51:52.972 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:51:53 compute-0 nova_compute[185389]: 2026-01-26 16:51:53.031 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:51:53 compute-0 nova_compute[185389]: 2026-01-26 16:51:53.042 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ad24fa25-1660-453a-ad2c-f873360adfae/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:51:53 compute-0 nova_compute[185389]: 2026-01-26 16:51:53.113 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ad24fa25-1660-453a-ad2c-f873360adfae/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:51:53 compute-0 nova_compute[185389]: 2026-01-26 16:51:53.115 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ad24fa25-1660-453a-ad2c-f873360adfae/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:51:53 compute-0 nova_compute[185389]: 2026-01-26 16:51:53.191 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ad24fa25-1660-453a-ad2c-f873360adfae/disk --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:51:53 compute-0 nova_compute[185389]: 2026-01-26 16:51:53.192 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ad24fa25-1660-453a-ad2c-f873360adfae/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:51:53 compute-0 nova_compute[185389]: 2026-01-26 16:51:53.265 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ad24fa25-1660-453a-ad2c-f873360adfae/disk.eph0 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:51:53 compute-0 nova_compute[185389]: 2026-01-26 16:51:53.267 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ad24fa25-1660-453a-ad2c-f873360adfae/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:51:53 compute-0 nova_compute[185389]: 2026-01-26 16:51:53.332 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ad24fa25-1660-453a-ad2c-f873360adfae/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:51:53 compute-0 nova_compute[185389]: 2026-01-26 16:51:53.339 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:51:53 compute-0 nova_compute[185389]: 2026-01-26 16:51:53.403 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:51:53 compute-0 nova_compute[185389]: 2026-01-26 16:51:53.404 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:51:53 compute-0 nova_compute[185389]: 2026-01-26 16:51:53.469 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:51:53 compute-0 nova_compute[185389]: 2026-01-26 16:51:53.470 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:51:53 compute-0 nova_compute[185389]: 2026-01-26 16:51:53.540 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:51:53 compute-0 nova_compute[185389]: 2026-01-26 16:51:53.542 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:51:53 compute-0 nova_compute[185389]: 2026-01-26 16:51:53.609 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:51:54 compute-0 nova_compute[185389]: 2026-01-26 16:51:54.001 185393 WARNING nova.virt.libvirt.driver [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:51:54 compute-0 nova_compute[185389]: 2026-01-26 16:51:54.004 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4898MB free_disk=72.39920806884766GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:51:54 compute-0 nova_compute[185389]: 2026-01-26 16:51:54.004 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:51:54 compute-0 nova_compute[185389]: 2026-01-26 16:51:54.005 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:51:54 compute-0 nova_compute[185389]: 2026-01-26 16:51:54.673 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance 60ba224f-9c5d-4eb4-b501-66d7339832b9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:51:54 compute-0 nova_compute[185389]: 2026-01-26 16:51:54.673 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance a2578f61-3f19-40f4-a32f-97cf22569550 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:51:54 compute-0 nova_compute[185389]: 2026-01-26 16:51:54.673 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance ad24fa25-1660-453a-ad2c-f873360adfae actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:51:54 compute-0 nova_compute[185389]: 2026-01-26 16:51:54.673 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:51:54 compute-0 nova_compute[185389]: 2026-01-26 16:51:54.673 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:51:54 compute-0 nova_compute[185389]: 2026-01-26 16:51:54.788 185393 DEBUG nova.compute.provider_tree [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:51:54 compute-0 nova_compute[185389]: 2026-01-26 16:51:54.811 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:51:54 compute-0 nova_compute[185389]: 2026-01-26 16:51:54.918 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:51:54 compute-0 nova_compute[185389]: 2026-01-26 16:51:54.920 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:51:54 compute-0 nova_compute[185389]: 2026-01-26 16:51:54.920 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.915s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:51:56 compute-0 nova_compute[185389]: 2026-01-26 16:51:56.185 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:51:56 compute-0 nova_compute[185389]: 2026-01-26 16:51:56.916 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:51:56 compute-0 nova_compute[185389]: 2026-01-26 16:51:56.917 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:51:56 compute-0 nova_compute[185389]: 2026-01-26 16:51:56.945 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:51:57 compute-0 podman[244472]: 2026-01-26 16:51:57.216729775 +0000 UTC m=+0.106281073 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 26 16:51:59 compute-0 podman[201244]: time="2026-01-26T16:51:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 16:51:59 compute-0 podman[201244]: @ - - [26/Jan/2026:16:51:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 16:51:59 compute-0 podman[201244]: @ - - [26/Jan/2026:16:51:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4383 "" "Go-http-client/1.1"
Jan 26 16:51:59 compute-0 nova_compute[185389]: 2026-01-26 16:51:59.813 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:52:01 compute-0 nova_compute[185389]: 2026-01-26 16:52:01.189 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:52:01 compute-0 openstack_network_exporter[204387]: ERROR   16:52:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 16:52:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:52:01 compute-0 openstack_network_exporter[204387]: ERROR   16:52:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 16:52:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:52:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:52:01.733 106955 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:52:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:52:01.734 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:52:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:52:01.734 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:52:04 compute-0 nova_compute[185389]: 2026-01-26 16:52:04.815 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:52:06 compute-0 nova_compute[185389]: 2026-01-26 16:52:06.192 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:52:06 compute-0 podman[244493]: 2026-01-26 16:52:06.202248976 +0000 UTC m=+0.088775395 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., architecture=x86_64, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, config_id=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, vendor=Red Hat, Inc., name=ubi9-minimal, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=)
Jan 26 16:52:09 compute-0 podman[244512]: 2026-01-26 16:52:09.180275784 +0000 UTC m=+0.067838397 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260120, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, tcib_managed=true)
Jan 26 16:52:09 compute-0 nova_compute[185389]: 2026-01-26 16:52:09.818 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:52:11 compute-0 nova_compute[185389]: 2026-01-26 16:52:11.196 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:52:14 compute-0 podman[244532]: 2026-01-26 16:52:14.785783479 +0000 UTC m=+0.101065691 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 26 16:52:14 compute-0 nova_compute[185389]: 2026-01-26 16:52:14.821 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:52:16 compute-0 podman[244556]: 2026-01-26 16:52:16.174629887 +0000 UTC m=+0.064700831 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 16:52:16 compute-0 nova_compute[185389]: 2026-01-26 16:52:16.199 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:52:19 compute-0 podman[244575]: 2026-01-26 16:52:19.193917077 +0000 UTC m=+0.077473259 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, 
io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:52:19 compute-0 podman[244574]: 2026-01-26 16:52:19.248273515 +0000 UTC m=+0.135072315 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 26 16:52:19 compute-0 podman[244576]: 2026-01-26 16:52:19.255109452 +0000 UTC m=+0.123960054 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, architecture=x86_64, vcs-type=git, config_id=kepler, distribution-scope=public, io.openshift.tags=base rhel9, container_name=kepler, vendor=Red Hat, Inc., version=9.4, maintainer=Red Hat, Inc., release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 26 16:52:19 compute-0 nova_compute[185389]: 2026-01-26 16:52:19.823 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:52:21 compute-0 nova_compute[185389]: 2026-01-26 16:52:21.207 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:52:24 compute-0 nova_compute[185389]: 2026-01-26 16:52:24.824 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:52:26 compute-0 nova_compute[185389]: 2026-01-26 16:52:26.216 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:52:28 compute-0 podman[244638]: 2026-01-26 16:52:28.23681577 +0000 UTC m=+0.104295139 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 16:52:29 compute-0 podman[201244]: time="2026-01-26T16:52:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 16:52:29 compute-0 podman[201244]: @ - - [26/Jan/2026:16:52:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 16:52:29 compute-0 podman[201244]: @ - - [26/Jan/2026:16:52:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4379 "" "Go-http-client/1.1"
Jan 26 16:52:29 compute-0 nova_compute[185389]: 2026-01-26 16:52:29.825 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:52:31 compute-0 nova_compute[185389]: 2026-01-26 16:52:31.219 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.342 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.343 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.343 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.344 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f04ce8af020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.345 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.348 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.348 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8adb20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.348 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.348 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.348 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.348 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.348 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.348 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.348 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.348 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.348 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.349 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.349 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.349 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.362 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '60ba224f-9c5d-4eb4-b501-66d7339832b9', 'name': 'test_0', 'flavor': {'id': 'c2a8df4d-a1d7-42a3-8279-8c7de8a1a662', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '718285d9-0264-40f4-9fb3-d2faff180284'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'aa8f1f3bbce34237a208c8e92ca9286f', 'user_id': '3c0ab9326d69400aa6a4a91432885d7f', 'hostId': '5b2ad2004cb5a5985538ff82d4dda707a9aa9c0c35c745039e18e89b', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.365 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'ad24fa25-1660-453a-ad2c-f873360adfae', 'name': 'vn-vo2qfhx-tzlt2x4t3ov5-sakwwya3bplf-vnf-jp2oxis2stdo', 'flavor': {'id': 'c2a8df4d-a1d7-42a3-8279-8c7de8a1a662', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '718285d9-0264-40f4-9fb3-d2faff180284'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000005', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'paused', 'tenant_id': 'aa8f1f3bbce34237a208c8e92ca9286f', 'user_id': '3c0ab9326d69400aa6a4a91432885d7f', 'hostId': '5b2ad2004cb5a5985538ff82d4dda707a9aa9c0c35c745039e18e89b', 'status': 'paused', 'metadata': {'metering.server_group': '06b33269-d1c6-4fb9-a44b-be304982a550'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.368 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'a2578f61-3f19-40f4-a32f-97cf22569550', 'name': 'vn-vo2qfhx-3o3bmw4mfsy3-eo23jljqgyv6-vnf-pi7veetjym6y', 'flavor': {'id': 'c2a8df4d-a1d7-42a3-8279-8c7de8a1a662', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '718285d9-0264-40f4-9fb3-d2faff180284'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'aa8f1f3bbce34237a208c8e92ca9286f', 'user_id': '3c0ab9326d69400aa6a4a91432885d7f', 'hostId': '5b2ad2004cb5a5985538ff82d4dda707a9aa9c0c35c745039e18e89b', 'status': 'active', 'metadata': {'metering.server_group': '06b33269-d1c6-4fb9-a44b-be304982a550'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.369 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.369 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.369 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.369 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.370 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-26T16:52:31.369576) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:52:31 compute-0 openstack_network_exporter[204387]: ERROR   16:52:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 16:52:31 compute-0 openstack_network_exporter[204387]: ERROR   16:52:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.436 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.437 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.437 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.501 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.502 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.502 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.566 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.567 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.567 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.568 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.568 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f04ce8af080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.568 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.568 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.568 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.569 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.569 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.latency volume: 876270399 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.569 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.latency volume: 10769042 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.569 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-26T16:52:31.569095) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.570 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.570 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.570 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.570 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.571 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.latency volume: 1221465504 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.571 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.latency volume: 9811607 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.571 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.572 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.572 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f04ce8af0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.572 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.572 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.572 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.573 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.573 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.573 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-26T16:52:31.572920) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.574 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.574 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.575 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.575 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.575 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.576 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.576 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.576 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.577 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.577 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f04ce8af470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.577 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.577 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.577 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.578 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.578 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-26T16:52:31.577922) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.582 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.588 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.594 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.595 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.595 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f04ce8ac260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.595 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.596 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f04cfa81c70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.596 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.596 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.597 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.597 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.598 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-26T16:52:31.597443) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.628 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/cpu volume: 44300000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.655 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/cpu volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.683 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/cpu volume: 38610000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.684 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.684 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f04ce8ac170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.685 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.685 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.686 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.686 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.686 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.686 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-26T16:52:31.686104) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.687 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.688 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.688 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.688 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f04ce8af140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.689 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.689 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.689 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.690 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.691 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.691 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f04ce8adb50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.691 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.691 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.692 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.692 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-26T16:52:31.689901) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.692 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.693 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.693 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.694 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.694 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.694 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f04ce8ad9a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.695 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.695 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.695 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.696 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.696 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.696 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.697 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-26T16:52:31.692529) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.697 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-26T16:52:31.696033) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.697 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.outgoing.bytes volume: 2468 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.698 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.698 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f04ce8af1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.699 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.699 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.699 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.699 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.700 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.700 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f04ce8ad8e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.701 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.701 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.701 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.701 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-26T16:52:31.699702) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.701 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.702 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.702 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-26T16:52:31.701742) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.702 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.702 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.outgoing.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.702 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.703 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f04ce8ada60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.703 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.703 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.703 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.703 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.703 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.703 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-26T16:52:31.703405) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.703 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.704 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.704 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.704 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f04ce8adaf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.705 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.705 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f04ce8ad880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.705 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.705 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.706 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.706 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.706 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-26T16:52:31.706094) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.706 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.706 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.707 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.707 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.707 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f04ce8af3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.707 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.707 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.708 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.708 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.708 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/memory.usage volume: 48.79296875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.708 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-26T16:52:31.708095) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.708 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.708 14 WARNING ceilometer.compute.pollsters [-] memory.usage statistic in not available for instance ad24fa25-1660-453a-ad2c-f873360adfae: ceilometer.compute.pollsters.NoVolumeException
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.709 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/memory.usage volume: 48.953125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.709 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.709 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f04ce8af6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.709 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.709 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.709 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.710 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.710 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.710 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.710 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.710 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.711 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f04ce8af410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.711 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.711 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.711 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.711 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-26T16:52:31.710035) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.711 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.711 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.bytes volume: 2304 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.712 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-26T16:52:31.711729) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.712 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.712 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.incoming.bytes volume: 1696 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.712 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.713 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f04ce8ac470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.713 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.713 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.713 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.713 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.713 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.713 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.714 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.714 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-26T16:52:31.713404) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.714 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.714 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f04d17142f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.714 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.714 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.714 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.714 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.715 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-26T16:52:31.714910) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.739 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.739 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.740 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.763 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.764 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.765 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.790 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.791 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.791 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.792 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.792 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f04ce8afe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.792 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.792 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.792 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.792 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.793 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.793 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.793 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.793 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-26T16:52:31.792896) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.794 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.794 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.794 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.794 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.795 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.795 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.795 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.796 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f04ce8aede0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.796 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.796 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.796 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.796 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.796 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.796 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.797 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.797 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/disk.device.read.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.797 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/disk.device.read.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.797 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/disk.device.read.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.798 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.798 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.798 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.799 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.799 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f04ce8aef00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.799 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.799 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.799 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.799 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.799 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.latency volume: 331117122 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.800 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.latency volume: 97972603 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.800 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.latency volume: 58692222 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.800 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/disk.device.read.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.800 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/disk.device.read.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.801 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/disk.device.read.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.801 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.latency volume: 437272566 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.801 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.latency volume: 86953754 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.801 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.latency volume: 62824695 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.802 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.802 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f04ce8ac710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.802 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-26T16:52:31.796420) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.802 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-26T16:52:31.799673) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.802 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.803 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.803 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.803 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.803 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.803 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/power.state volume: 3 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.803 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.804 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-26T16:52:31.803329) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.804 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.804 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f04ce8aef60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.804 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.804 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.805 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.805 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.805 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-26T16:52:31.805150) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.805 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.805 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.806 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.806 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/disk.device.read.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.806 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/disk.device.read.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.806 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/disk.device.read.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.806 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.807 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.807 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.807 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.808 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f04ce8aefc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.808 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.808 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.808 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.808 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.808 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.808 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-26T16:52:31.808377) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.809 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.809 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.809 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.809 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.810 14 DEBUG ceilometer.compute.pollsters [-] ad24fa25-1660-453a-ad2c-f873360adfae/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.810 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.810 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.810 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.811 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.811 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.811 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.812 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.812 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.812 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.812 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.812 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.812 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.812 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.812 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.812 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.812 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.812 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.812 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.813 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.813 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.813 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.813 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.813 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.813 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.813 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.813 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.813 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.813 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.813 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:52:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:52:31.813 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:52:34 compute-0 nova_compute[185389]: 2026-01-26 16:52:34.828 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:52:36 compute-0 nova_compute[185389]: 2026-01-26 16:52:36.221 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:52:37 compute-0 podman[244662]: 2026-01-26 16:52:37.228272505 +0000 UTC m=+0.099536620 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., version=9.6, io.openshift.expose-services=, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, architecture=x86_64, container_name=openstack_network_exporter, distribution-scope=public, managed_by=edpm_ansible, name=ubi9-minimal, release=1755695350, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Jan 26 16:52:39 compute-0 nova_compute[185389]: 2026-01-26 16:52:39.830 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:52:40 compute-0 podman[244682]: 2026-01-26 16:52:40.228421616 +0000 UTC m=+0.111360551 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260120, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute)
Jan 26 16:52:41 compute-0 nova_compute[185389]: 2026-01-26 16:52:41.224 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:52:41 compute-0 nova_compute[185389]: 2026-01-26 16:52:41.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:52:41 compute-0 nova_compute[185389]: 2026-01-26 16:52:41.720 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:52:43 compute-0 nova_compute[185389]: 2026-01-26 16:52:43.101 185393 WARNING nova.compute.manager [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: ad24fa25-1660-453a-ad2c-f873360adfae] Timeout waiting for ['network-vif-plugged-97c831fb-1c5a-4bb0-a6ac-9e8b5bf3600d'] for instance with vm_state building and task_state spawning. Event states are: network-vif-plugged-97c831fb-1c5a-4bb0-a6ac-9e8b5bf3600d: timed out after 300.00 seconds: eventlet.timeout.Timeout: 300 seconds
Jan 26 16:52:43 compute-0 kernel: tap97c831fb-1c (unregistering): left promiscuous mode
Jan 26 16:52:43 compute-0 NetworkManager[56253]: <info>  [1769446363.1337] device (tap97c831fb-1c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:52:43 compute-0 nova_compute[185389]: 2026-01-26 16:52:43.148 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:52:43 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Deactivated successfully.
Jan 26 16:52:43 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Consumed 1.276s CPU time.
Jan 26 16:52:43 compute-0 systemd-machined[156679]: Machine qemu-5-instance-00000005 terminated.
Jan 26 16:52:43 compute-0 nova_compute[185389]: 2026-01-26 16:52:43.395 185393 DEBUG nova.virt.libvirt.vif [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:47:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-vo2qfhx-tzlt2x4t3ov5-sakwwya3bplf-vnf-jp2oxis2stdo',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-vo2qfhx-tzlt2x4t3ov5-sakwwya3bplf-vnf-jp2oxis2stdo',id=5,image_ref='718285d9-0264-40f4-9fb3-d2faff180284',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='06b33269-d1c6-4fb9-a44b-be304982a550'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='aa8f1f3bbce34237a208c8e92ca9286f',ramdisk_id='',reservation_id='r-3ekc1qq3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='718285d9-0264-40f4-9fb3-d2faff180284',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.open
stack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:47:31Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT02NzI5OTY5NzQwMjk0NDkyNDIwPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTY3Mjk5Njk3NDAyOTQ0OTI0MjA9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NjcyOTk2OTc0MDI5NDQ5MjQyMD09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTY3Mjk5Njk3NDAyOTQ0OTI0MjA9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3B
vc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4
oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT02NzI5OTY5NzQwMjk0NDkyNDIwPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT02NzI5OTY5NzQwMjk0NDkyNDIwPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2d
TdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9
Jan 26 16:52:43 compute-0 nova_compute[185389]: wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NjcyOTk2OTc0MDI5NDQ5MjQyMD09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29
udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTY3Mjk5Njk3NDAyOTQ0OTI0MjA9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT02NzI5OTY5NzQwMjk0NDkyNDIwPT0tLQo=',user_id='3c0ab9326d69400aa6a4a91432885d7f',uuid=ad24fa25-1660-453a-ad2c-f873360adfae,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "97c831fb-1c5a-4bb0-a6ac-9e8b5bf3600d", "address": "fa:16:3e:a4:27:56", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.124", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97c831fb-1c", "ovs_interfaceid": "97c831fb-1c5a-4bb0-a6ac-9e8b5bf3600d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:52:43 compute-0 nova_compute[185389]: 2026-01-26 16:52:43.396 185393 DEBUG nova.network.os_vif_util [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Converting VIF {"id": "97c831fb-1c5a-4bb0-a6ac-9e8b5bf3600d", "address": "fa:16:3e:a4:27:56", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.124", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97c831fb-1c", "ovs_interfaceid": "97c831fb-1c5a-4bb0-a6ac-9e8b5bf3600d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:52:43 compute-0 nova_compute[185389]: 2026-01-26 16:52:43.397 185393 DEBUG nova.network.os_vif_util [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a4:27:56,bridge_name='br-int',has_traffic_filtering=True,id=97c831fb-1c5a-4bb0-a6ac-9e8b5bf3600d,network=Network(74318d1e-b1d8-47d5-8ac3-218d758610fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap97c831fb-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:52:43 compute-0 nova_compute[185389]: 2026-01-26 16:52:43.397 185393 DEBUG os_vif [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a4:27:56,bridge_name='br-int',has_traffic_filtering=True,id=97c831fb-1c5a-4bb0-a6ac-9e8b5bf3600d,network=Network(74318d1e-b1d8-47d5-8ac3-218d758610fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap97c831fb-1c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:52:43 compute-0 nova_compute[185389]: 2026-01-26 16:52:43.399 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:52:43 compute-0 rsyslogd[235842]: message too long (8192) with configured size 8096, begin of message is: 2026-01-26 16:52:43.395 185393 DEBUG nova.virt.libvirt.vif [None req-ea5bcace-b8 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 26 16:52:43 compute-0 nova_compute[185389]: 2026-01-26 16:52:43.399 185393 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap97c831fb-1c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:52:43 compute-0 nova_compute[185389]: 2026-01-26 16:52:43.401 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:52:43 compute-0 nova_compute[185389]: 2026-01-26 16:52:43.403 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:52:43 compute-0 nova_compute[185389]: 2026-01-26 16:52:43.407 185393 INFO os_vif [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a4:27:56,bridge_name='br-int',has_traffic_filtering=True,id=97c831fb-1c5a-4bb0-a6ac-9e8b5bf3600d,network=Network(74318d1e-b1d8-47d5-8ac3-218d758610fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap97c831fb-1c')
Jan 26 16:52:43 compute-0 nova_compute[185389]: 2026-01-26 16:52:43.408 185393 INFO nova.virt.libvirt.driver [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: ad24fa25-1660-453a-ad2c-f873360adfae] Deleting instance files /var/lib/nova/instances/ad24fa25-1660-453a-ad2c-f873360adfae_del
Jan 26 16:52:43 compute-0 nova_compute[185389]: 2026-01-26 16:52:43.409 185393 INFO nova.virt.libvirt.driver [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: ad24fa25-1660-453a-ad2c-f873360adfae] Deletion of /var/lib/nova/instances/ad24fa25-1660-453a-ad2c-f873360adfae_del complete
Jan 26 16:52:43 compute-0 nova_compute[185389]: 2026-01-26 16:52:43.469 185393 ERROR nova.compute.manager [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: ad24fa25-1660-453a-ad2c-f873360adfae] Instance failed to spawn: nova.exception.VirtualInterfaceCreateException: Virtual Interface creation failed
Jan 26 16:52:43 compute-0 nova_compute[185389]: 2026-01-26 16:52:43.469 185393 ERROR nova.compute.manager [instance: ad24fa25-1660-453a-ad2c-f873360adfae] Traceback (most recent call last):
Jan 26 16:52:43 compute-0 nova_compute[185389]: 2026-01-26 16:52:43.469 185393 ERROR nova.compute.manager [instance: ad24fa25-1660-453a-ad2c-f873360adfae]   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 7750, in _create_guest_with_network
Jan 26 16:52:43 compute-0 nova_compute[185389]: 2026-01-26 16:52:43.469 185393 ERROR nova.compute.manager [instance: ad24fa25-1660-453a-ad2c-f873360adfae]     guest = self._create_guest(
Jan 26 16:52:43 compute-0 nova_compute[185389]: 2026-01-26 16:52:43.469 185393 ERROR nova.compute.manager [instance: ad24fa25-1660-453a-ad2c-f873360adfae]   File "/usr/lib64/python3.9/contextlib.py", line 126, in __exit__
Jan 26 16:52:43 compute-0 nova_compute[185389]: 2026-01-26 16:52:43.469 185393 ERROR nova.compute.manager [instance: ad24fa25-1660-453a-ad2c-f873360adfae]     next(self.gen)
Jan 26 16:52:43 compute-0 nova_compute[185389]: 2026-01-26 16:52:43.469 185393 ERROR nova.compute.manager [instance: ad24fa25-1660-453a-ad2c-f873360adfae]   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 559, in wait_for_instance_event
Jan 26 16:52:43 compute-0 nova_compute[185389]: 2026-01-26 16:52:43.469 185393 ERROR nova.compute.manager [instance: ad24fa25-1660-453a-ad2c-f873360adfae]     self._wait_for_instance_events(
Jan 26 16:52:43 compute-0 nova_compute[185389]: 2026-01-26 16:52:43.469 185393 ERROR nova.compute.manager [instance: ad24fa25-1660-453a-ad2c-f873360adfae]   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 471, in _wait_for_instance_events
Jan 26 16:52:43 compute-0 nova_compute[185389]: 2026-01-26 16:52:43.469 185393 ERROR nova.compute.manager [instance: ad24fa25-1660-453a-ad2c-f873360adfae]     actual_event = event.wait()
Jan 26 16:52:43 compute-0 nova_compute[185389]: 2026-01-26 16:52:43.469 185393 ERROR nova.compute.manager [instance: ad24fa25-1660-453a-ad2c-f873360adfae]   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 436, in wait
Jan 26 16:52:43 compute-0 nova_compute[185389]: 2026-01-26 16:52:43.469 185393 ERROR nova.compute.manager [instance: ad24fa25-1660-453a-ad2c-f873360adfae]     instance_event = self.event.wait()
Jan 26 16:52:43 compute-0 nova_compute[185389]: 2026-01-26 16:52:43.469 185393 ERROR nova.compute.manager [instance: ad24fa25-1660-453a-ad2c-f873360adfae]   File "/usr/lib/python3.9/site-packages/eventlet/event.py", line 125, in wait
Jan 26 16:52:43 compute-0 nova_compute[185389]: 2026-01-26 16:52:43.469 185393 ERROR nova.compute.manager [instance: ad24fa25-1660-453a-ad2c-f873360adfae]     result = hub.switch()
Jan 26 16:52:43 compute-0 nova_compute[185389]: 2026-01-26 16:52:43.469 185393 ERROR nova.compute.manager [instance: ad24fa25-1660-453a-ad2c-f873360adfae]   File "/usr/lib/python3.9/site-packages/eventlet/hubs/hub.py", line 313, in switch
Jan 26 16:52:43 compute-0 nova_compute[185389]: 2026-01-26 16:52:43.469 185393 ERROR nova.compute.manager [instance: ad24fa25-1660-453a-ad2c-f873360adfae]     return self.greenlet.switch()
Jan 26 16:52:43 compute-0 nova_compute[185389]: 2026-01-26 16:52:43.469 185393 ERROR nova.compute.manager [instance: ad24fa25-1660-453a-ad2c-f873360adfae] eventlet.timeout.Timeout: 300 seconds
Jan 26 16:52:43 compute-0 nova_compute[185389]: 2026-01-26 16:52:43.469 185393 ERROR nova.compute.manager [instance: ad24fa25-1660-453a-ad2c-f873360adfae] 
Jan 26 16:52:43 compute-0 nova_compute[185389]: 2026-01-26 16:52:43.469 185393 ERROR nova.compute.manager [instance: ad24fa25-1660-453a-ad2c-f873360adfae] During handling of the above exception, another exception occurred:
Jan 26 16:52:43 compute-0 nova_compute[185389]: 2026-01-26 16:52:43.469 185393 ERROR nova.compute.manager [instance: ad24fa25-1660-453a-ad2c-f873360adfae] 
Jan 26 16:52:43 compute-0 nova_compute[185389]: 2026-01-26 16:52:43.469 185393 ERROR nova.compute.manager [instance: ad24fa25-1660-453a-ad2c-f873360adfae] Traceback (most recent call last):
Jan 26 16:52:43 compute-0 nova_compute[185389]: 2026-01-26 16:52:43.469 185393 ERROR nova.compute.manager [instance: ad24fa25-1660-453a-ad2c-f873360adfae]   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 2864, in _build_resources
Jan 26 16:52:43 compute-0 nova_compute[185389]: 2026-01-26 16:52:43.469 185393 ERROR nova.compute.manager [instance: ad24fa25-1660-453a-ad2c-f873360adfae]     yield resources
Jan 26 16:52:43 compute-0 nova_compute[185389]: 2026-01-26 16:52:43.469 185393 ERROR nova.compute.manager [instance: ad24fa25-1660-453a-ad2c-f873360adfae]   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 2611, in _build_and_run_instance
Jan 26 16:52:43 compute-0 nova_compute[185389]: 2026-01-26 16:52:43.469 185393 ERROR nova.compute.manager [instance: ad24fa25-1660-453a-ad2c-f873360adfae]     self.driver.spawn(context, instance, image_meta,
Jan 26 16:52:43 compute-0 nova_compute[185389]: 2026-01-26 16:52:43.469 185393 ERROR nova.compute.manager [instance: ad24fa25-1660-453a-ad2c-f873360adfae]   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 4411, in spawn
Jan 26 16:52:43 compute-0 nova_compute[185389]: 2026-01-26 16:52:43.469 185393 ERROR nova.compute.manager [instance: ad24fa25-1660-453a-ad2c-f873360adfae]     self._create_guest_with_network(
Jan 26 16:52:43 compute-0 nova_compute[185389]: 2026-01-26 16:52:43.469 185393 ERROR nova.compute.manager [instance: ad24fa25-1660-453a-ad2c-f873360adfae]   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 7768, in _create_guest_with_network
Jan 26 16:52:43 compute-0 nova_compute[185389]: 2026-01-26 16:52:43.469 185393 ERROR nova.compute.manager [instance: ad24fa25-1660-453a-ad2c-f873360adfae]     raise exception.VirtualInterfaceCreateException()
Jan 26 16:52:43 compute-0 nova_compute[185389]: 2026-01-26 16:52:43.469 185393 ERROR nova.compute.manager [instance: ad24fa25-1660-453a-ad2c-f873360adfae] nova.exception.VirtualInterfaceCreateException: Virtual Interface creation failed
Jan 26 16:52:43 compute-0 nova_compute[185389]: 2026-01-26 16:52:43.469 185393 ERROR nova.compute.manager [instance: ad24fa25-1660-453a-ad2c-f873360adfae] 
Jan 26 16:52:43 compute-0 nova_compute[185389]: 2026-01-26 16:52:43.476 185393 INFO nova.compute.manager [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: ad24fa25-1660-453a-ad2c-f873360adfae] Terminating instance
Jan 26 16:52:43 compute-0 nova_compute[185389]: 2026-01-26 16:52:43.477 185393 DEBUG oslo_concurrency.lockutils [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Acquiring lock "refresh_cache-ad24fa25-1660-453a-ad2c-f873360adfae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:52:43 compute-0 nova_compute[185389]: 2026-01-26 16:52:43.478 185393 DEBUG oslo_concurrency.lockutils [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Acquired lock "refresh_cache-ad24fa25-1660-453a-ad2c-f873360adfae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:52:43 compute-0 nova_compute[185389]: 2026-01-26 16:52:43.478 185393 DEBUG nova.network.neutron [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: ad24fa25-1660-453a-ad2c-f873360adfae] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:52:44 compute-0 nova_compute[185389]: 2026-01-26 16:52:44.133 185393 DEBUG nova.network.neutron [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: ad24fa25-1660-453a-ad2c-f873360adfae] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:52:44 compute-0 nova_compute[185389]: 2026-01-26 16:52:44.833 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:52:45 compute-0 podman[244730]: 2026-01-26 16:52:45.224008567 +0000 UTC m=+0.093550056 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 26 16:52:45 compute-0 nova_compute[185389]: 2026-01-26 16:52:45.413 185393 DEBUG nova.network.neutron [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: ad24fa25-1660-453a-ad2c-f873360adfae] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:52:45 compute-0 nova_compute[185389]: 2026-01-26 16:52:45.432 185393 DEBUG oslo_concurrency.lockutils [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Releasing lock "refresh_cache-ad24fa25-1660-453a-ad2c-f873360adfae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:52:45 compute-0 nova_compute[185389]: 2026-01-26 16:52:45.434 185393 DEBUG nova.compute.manager [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: ad24fa25-1660-453a-ad2c-f873360adfae] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:52:45 compute-0 nova_compute[185389]: 2026-01-26 16:52:45.441 185393 DEBUG nova.virt.libvirt.driver [-] [instance: ad24fa25-1660-453a-ad2c-f873360adfae] During wait destroy, instance disappeared. _wait_for_destroy /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1527
Jan 26 16:52:45 compute-0 nova_compute[185389]: 2026-01-26 16:52:45.441 185393 INFO nova.virt.libvirt.driver [-] [instance: ad24fa25-1660-453a-ad2c-f873360adfae] Instance destroyed successfully.
Jan 26 16:52:45 compute-0 nova_compute[185389]: 2026-01-26 16:52:45.442 185393 INFO nova.virt.libvirt.driver [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: ad24fa25-1660-453a-ad2c-f873360adfae] Deletion of /var/lib/nova/instances/ad24fa25-1660-453a-ad2c-f873360adfae_del complete
Jan 26 16:52:45 compute-0 nova_compute[185389]: 2026-01-26 16:52:45.563 185393 INFO nova.compute.manager [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: ad24fa25-1660-453a-ad2c-f873360adfae] Took 0.13 seconds to destroy the instance on the hypervisor.
Jan 26 16:52:45 compute-0 nova_compute[185389]: 2026-01-26 16:52:45.565 185393 DEBUG nova.compute.claims [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: ad24fa25-1660-453a-ad2c-f873360adfae] Aborting claim: <nova.compute.claims.Claim object at 0x7fba902f8be0> abort /usr/lib/python3.9/site-packages/nova/compute/claims.py:85
Jan 26 16:52:45 compute-0 nova_compute[185389]: 2026-01-26 16:52:45.566 185393 DEBUG oslo_concurrency.lockutils [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.abort_instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:52:45 compute-0 nova_compute[185389]: 2026-01-26 16:52:45.566 185393 DEBUG oslo_concurrency.lockutils [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.abort_instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:52:45 compute-0 nova_compute[185389]: 2026-01-26 16:52:45.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:52:45 compute-0 nova_compute[185389]: 2026-01-26 16:52:45.738 185393 DEBUG nova.scheduler.client.report [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Refreshing inventories for resource provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 26 16:52:45 compute-0 nova_compute[185389]: 2026-01-26 16:52:45.852 185393 DEBUG nova.scheduler.client.report [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Updating ProviderTree inventory for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 26 16:52:45 compute-0 nova_compute[185389]: 2026-01-26 16:52:45.853 185393 DEBUG nova.compute.provider_tree [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Updating inventory in ProviderTree for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 26 16:52:45 compute-0 nova_compute[185389]: 2026-01-26 16:52:45.928 185393 DEBUG nova.scheduler.client.report [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Refreshing aggregate associations for resource provider b0bb5d31-f35b-4a04-b67d-66acc24fb822, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 26 16:52:45 compute-0 nova_compute[185389]: 2026-01-26 16:52:45.956 185393 DEBUG nova.scheduler.client.report [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Refreshing trait associations for resource provider b0bb5d31-f35b-4a04-b67d-66acc24fb822, traits: COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SHA,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_ABM,HW_CPU_X86_AMD_SVM,HW_CPU_X86_F16C,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE42,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_BMI,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_MMX,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 26 16:52:46 compute-0 nova_compute[185389]: 2026-01-26 16:52:46.036 185393 DEBUG nova.compute.provider_tree [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:52:46 compute-0 nova_compute[185389]: 2026-01-26 16:52:46.678 185393 DEBUG nova.scheduler.client.report [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:52:46 compute-0 nova_compute[185389]: 2026-01-26 16:52:46.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:52:46 compute-0 nova_compute[185389]: 2026-01-26 16:52:46.720 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 26 16:52:47 compute-0 nova_compute[185389]: 2026-01-26 16:52:47.020 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 26 16:52:47 compute-0 nova_compute[185389]: 2026-01-26 16:52:47.028 185393 DEBUG oslo_concurrency.lockutils [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.abort_instance_claim" :: held 1.462s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:52:47 compute-0 nova_compute[185389]: 2026-01-26 16:52:47.028 185393 ERROR nova.compute.manager [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: ad24fa25-1660-453a-ad2c-f873360adfae] Failed to allocate network(s): nova.exception.VirtualInterfaceCreateException: Virtual Interface creation failed
Jan 26 16:52:47 compute-0 nova_compute[185389]: 2026-01-26 16:52:47.028 185393 ERROR nova.compute.manager [instance: ad24fa25-1660-453a-ad2c-f873360adfae] Traceback (most recent call last):
Jan 26 16:52:47 compute-0 nova_compute[185389]: 2026-01-26 16:52:47.028 185393 ERROR nova.compute.manager [instance: ad24fa25-1660-453a-ad2c-f873360adfae]   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 7750, in _create_guest_with_network
Jan 26 16:52:47 compute-0 nova_compute[185389]: 2026-01-26 16:52:47.028 185393 ERROR nova.compute.manager [instance: ad24fa25-1660-453a-ad2c-f873360adfae]     guest = self._create_guest(
Jan 26 16:52:47 compute-0 nova_compute[185389]: 2026-01-26 16:52:47.028 185393 ERROR nova.compute.manager [instance: ad24fa25-1660-453a-ad2c-f873360adfae]   File "/usr/lib64/python3.9/contextlib.py", line 126, in __exit__
Jan 26 16:52:47 compute-0 nova_compute[185389]: 2026-01-26 16:52:47.028 185393 ERROR nova.compute.manager [instance: ad24fa25-1660-453a-ad2c-f873360adfae]     next(self.gen)
Jan 26 16:52:47 compute-0 nova_compute[185389]: 2026-01-26 16:52:47.028 185393 ERROR nova.compute.manager [instance: ad24fa25-1660-453a-ad2c-f873360adfae]   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 559, in wait_for_instance_event
Jan 26 16:52:47 compute-0 nova_compute[185389]: 2026-01-26 16:52:47.028 185393 ERROR nova.compute.manager [instance: ad24fa25-1660-453a-ad2c-f873360adfae]     self._wait_for_instance_events(
Jan 26 16:52:47 compute-0 nova_compute[185389]: 2026-01-26 16:52:47.028 185393 ERROR nova.compute.manager [instance: ad24fa25-1660-453a-ad2c-f873360adfae]   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 471, in _wait_for_instance_events
Jan 26 16:52:47 compute-0 nova_compute[185389]: 2026-01-26 16:52:47.028 185393 ERROR nova.compute.manager [instance: ad24fa25-1660-453a-ad2c-f873360adfae]     actual_event = event.wait()
Jan 26 16:52:47 compute-0 nova_compute[185389]: 2026-01-26 16:52:47.028 185393 ERROR nova.compute.manager [instance: ad24fa25-1660-453a-ad2c-f873360adfae]   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 436, in wait
Jan 26 16:52:47 compute-0 nova_compute[185389]: 2026-01-26 16:52:47.028 185393 ERROR nova.compute.manager [instance: ad24fa25-1660-453a-ad2c-f873360adfae]     instance_event = self.event.wait()
Jan 26 16:52:47 compute-0 nova_compute[185389]: 2026-01-26 16:52:47.028 185393 ERROR nova.compute.manager [instance: ad24fa25-1660-453a-ad2c-f873360adfae]   File "/usr/lib/python3.9/site-packages/eventlet/event.py", line 125, in wait
Jan 26 16:52:47 compute-0 nova_compute[185389]: 2026-01-26 16:52:47.028 185393 ERROR nova.compute.manager [instance: ad24fa25-1660-453a-ad2c-f873360adfae]     result = hub.switch()
Jan 26 16:52:47 compute-0 nova_compute[185389]: 2026-01-26 16:52:47.028 185393 ERROR nova.compute.manager [instance: ad24fa25-1660-453a-ad2c-f873360adfae]   File "/usr/lib/python3.9/site-packages/eventlet/hubs/hub.py", line 313, in switch
Jan 26 16:52:47 compute-0 nova_compute[185389]: 2026-01-26 16:52:47.028 185393 ERROR nova.compute.manager [instance: ad24fa25-1660-453a-ad2c-f873360adfae]     return self.greenlet.switch()
Jan 26 16:52:47 compute-0 nova_compute[185389]: 2026-01-26 16:52:47.028 185393 ERROR nova.compute.manager [instance: ad24fa25-1660-453a-ad2c-f873360adfae] eventlet.timeout.Timeout: 300 seconds
Jan 26 16:52:47 compute-0 nova_compute[185389]: 2026-01-26 16:52:47.028 185393 ERROR nova.compute.manager [instance: ad24fa25-1660-453a-ad2c-f873360adfae] 
Jan 26 16:52:47 compute-0 nova_compute[185389]: 2026-01-26 16:52:47.028 185393 ERROR nova.compute.manager [instance: ad24fa25-1660-453a-ad2c-f873360adfae] During handling of the above exception, another exception occurred:
Jan 26 16:52:47 compute-0 nova_compute[185389]: 2026-01-26 16:52:47.028 185393 ERROR nova.compute.manager [instance: ad24fa25-1660-453a-ad2c-f873360adfae] 
Jan 26 16:52:47 compute-0 nova_compute[185389]: 2026-01-26 16:52:47.028 185393 ERROR nova.compute.manager [instance: ad24fa25-1660-453a-ad2c-f873360adfae] Traceback (most recent call last):
Jan 26 16:52:47 compute-0 nova_compute[185389]: 2026-01-26 16:52:47.028 185393 ERROR nova.compute.manager [instance: ad24fa25-1660-453a-ad2c-f873360adfae]   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 2611, in _build_and_run_instance
Jan 26 16:52:47 compute-0 nova_compute[185389]: 2026-01-26 16:52:47.028 185393 ERROR nova.compute.manager [instance: ad24fa25-1660-453a-ad2c-f873360adfae]     self.driver.spawn(context, instance, image_meta,
Jan 26 16:52:47 compute-0 nova_compute[185389]: 2026-01-26 16:52:47.028 185393 ERROR nova.compute.manager [instance: ad24fa25-1660-453a-ad2c-f873360adfae]   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 4411, in spawn
Jan 26 16:52:47 compute-0 nova_compute[185389]: 2026-01-26 16:52:47.028 185393 ERROR nova.compute.manager [instance: ad24fa25-1660-453a-ad2c-f873360adfae]     self._create_guest_with_network(
Jan 26 16:52:47 compute-0 nova_compute[185389]: 2026-01-26 16:52:47.028 185393 ERROR nova.compute.manager [instance: ad24fa25-1660-453a-ad2c-f873360adfae]   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 7768, in _create_guest_with_network
Jan 26 16:52:47 compute-0 nova_compute[185389]: 2026-01-26 16:52:47.028 185393 ERROR nova.compute.manager [instance: ad24fa25-1660-453a-ad2c-f873360adfae]     raise exception.VirtualInterfaceCreateException()
Jan 26 16:52:47 compute-0 nova_compute[185389]: 2026-01-26 16:52:47.028 185393 ERROR nova.compute.manager [instance: ad24fa25-1660-453a-ad2c-f873360adfae] nova.exception.VirtualInterfaceCreateException: Virtual Interface creation failed
Jan 26 16:52:47 compute-0 nova_compute[185389]: 2026-01-26 16:52:47.028 185393 ERROR nova.compute.manager [instance: ad24fa25-1660-453a-ad2c-f873360adfae] 
Jan 26 16:52:47 compute-0 nova_compute[185389]: 2026-01-26 16:52:47.030 185393 DEBUG nova.compute.utils [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: ad24fa25-1660-453a-ad2c-f873360adfae] Virtual Interface creation failed notify_about_instance_usage /usr/lib/python3.9/site-packages/nova/compute/utils.py:430
Jan 26 16:52:47 compute-0 nova_compute[185389]: 2026-01-26 16:52:47.031 185393 ERROR nova.compute.manager [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: ad24fa25-1660-453a-ad2c-f873360adfae] Build of instance ad24fa25-1660-453a-ad2c-f873360adfae aborted: Failed to allocate the network(s), not rescheduling.: nova.exception.BuildAbortException: Build of instance ad24fa25-1660-453a-ad2c-f873360adfae aborted: Failed to allocate the network(s), not rescheduling.
Jan 26 16:52:47 compute-0 nova_compute[185389]: 2026-01-26 16:52:47.031 185393 DEBUG nova.compute.manager [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: ad24fa25-1660-453a-ad2c-f873360adfae] Unplugging VIFs for instance _cleanup_allocated_networks /usr/lib/python3.9/site-packages/nova/compute/manager.py:2976
Jan 26 16:52:47 compute-0 nova_compute[185389]: 2026-01-26 16:52:47.032 185393 DEBUG oslo_concurrency.lockutils [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Acquiring lock "refresh_cache-ad24fa25-1660-453a-ad2c-f873360adfae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:52:47 compute-0 nova_compute[185389]: 2026-01-26 16:52:47.032 185393 DEBUG oslo_concurrency.lockutils [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Acquired lock "refresh_cache-ad24fa25-1660-453a-ad2c-f873360adfae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:52:47 compute-0 nova_compute[185389]: 2026-01-26 16:52:47.032 185393 DEBUG nova.network.neutron [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: ad24fa25-1660-453a-ad2c-f873360adfae] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:52:47 compute-0 podman[244753]: 2026-01-26 16:52:47.218270178 +0000 UTC m=+0.084515130 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 26 16:52:47 compute-0 nova_compute[185389]: 2026-01-26 16:52:47.380 185393 DEBUG nova.network.neutron [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: ad24fa25-1660-453a-ad2c-f873360adfae] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:52:47 compute-0 nova_compute[185389]: 2026-01-26 16:52:47.800 185393 DEBUG nova.network.neutron [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: ad24fa25-1660-453a-ad2c-f873360adfae] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:52:47 compute-0 nova_compute[185389]: 2026-01-26 16:52:47.821 185393 DEBUG oslo_concurrency.lockutils [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Releasing lock "refresh_cache-ad24fa25-1660-453a-ad2c-f873360adfae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:52:47 compute-0 nova_compute[185389]: 2026-01-26 16:52:47.821 185393 DEBUG nova.compute.manager [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: ad24fa25-1660-453a-ad2c-f873360adfae] Unplugged VIFs for instance _cleanup_allocated_networks /usr/lib/python3.9/site-packages/nova/compute/manager.py:3012
Jan 26 16:52:47 compute-0 nova_compute[185389]: 2026-01-26 16:52:47.822 185393 DEBUG nova.compute.manager [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: ad24fa25-1660-453a-ad2c-f873360adfae] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:52:47 compute-0 nova_compute[185389]: 2026-01-26 16:52:47.822 185393 DEBUG nova.network.neutron [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: ad24fa25-1660-453a-ad2c-f873360adfae] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:52:47 compute-0 nova_compute[185389]: 2026-01-26 16:52:47.942 185393 DEBUG nova.network.neutron [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: ad24fa25-1660-453a-ad2c-f873360adfae] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:52:48 compute-0 nova_compute[185389]: 2026-01-26 16:52:48.059 185393 DEBUG neutronclient.v2_0.client [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Error message: {"NeutronError": {"type": "PortNotFound", "message": "Port 97c831fb-1c5a-4bb0-a6ac-9e8b5bf3600d could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262
Jan 26 16:52:48 compute-0 nova_compute[185389]: 2026-01-26 16:52:48.059 185393 DEBUG nova.network.neutron [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Unable to show port 97c831fb-1c5a-4bb0-a6ac-9e8b5bf3600d as it no longer exists. _unbind_ports /usr/lib/python3.9/site-packages/nova/network/neutron.py:666
Jan 26 16:52:48 compute-0 nova_compute[185389]: 2026-01-26 16:52:48.252 185393 DEBUG nova.network.neutron [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: ad24fa25-1660-453a-ad2c-f873360adfae] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:52:48 compute-0 nova_compute[185389]: 2026-01-26 16:52:48.283 185393 INFO nova.compute.manager [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: ad24fa25-1660-453a-ad2c-f873360adfae] Took 0.46 seconds to deallocate network for instance.
Jan 26 16:52:48 compute-0 nova_compute[185389]: 2026-01-26 16:52:48.404 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:52:48 compute-0 nova_compute[185389]: 2026-01-26 16:52:48.479 185393 INFO nova.scheduler.client.report [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Deleted allocations for instance ad24fa25-1660-453a-ad2c-f873360adfae
Jan 26 16:52:48 compute-0 nova_compute[185389]: 2026-01-26 16:52:48.480 185393 DEBUG oslo_concurrency.lockutils [None req-ea5bcace-b876-4bbd-a953-d96d20112616 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "ad24fa25-1660-453a-ad2c-f873360adfae" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 318.637s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:52:48 compute-0 nova_compute[185389]: 2026-01-26 16:52:48.480 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "ad24fa25-1660-453a-ad2c-f873360adfae" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 308.766s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:52:48 compute-0 nova_compute[185389]: 2026-01-26 16:52:48.480 185393 INFO nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: ad24fa25-1660-453a-ad2c-f873360adfae] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:52:48 compute-0 nova_compute[185389]: 2026-01-26 16:52:48.480 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "ad24fa25-1660-453a-ad2c-f873360adfae" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:52:49 compute-0 nova_compute[185389]: 2026-01-26 16:52:49.021 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:52:49 compute-0 nova_compute[185389]: 2026-01-26 16:52:49.021 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:52:49 compute-0 nova_compute[185389]: 2026-01-26 16:52:49.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:52:49 compute-0 nova_compute[185389]: 2026-01-26 16:52:49.720 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 26 16:52:49 compute-0 nova_compute[185389]: 2026-01-26 16:52:49.836 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:52:50 compute-0 podman[244774]: 2026-01-26 16:52:50.229133778 +0000 UTC m=+0.111827934 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:52:50 compute-0 podman[244775]: 2026-01-26 16:52:50.235931502 +0000 UTC m=+0.102142029 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, vendor=Red Hat, Inc., version=9.4, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, release-0.7.12=, distribution-scope=public, name=ubi9, architecture=x86_64, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 26 16:52:50 compute-0 podman[244773]: 2026-01-26 16:52:50.260651785 +0000 UTC m=+0.143946688 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 26 16:52:50 compute-0 nova_compute[185389]: 2026-01-26 16:52:50.733 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:52:50 compute-0 nova_compute[185389]: 2026-01-26 16:52:50.734 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:52:50 compute-0 nova_compute[185389]: 2026-01-26 16:52:50.735 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 16:52:51 compute-0 nova_compute[185389]: 2026-01-26 16:52:51.387 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "refresh_cache-60ba224f-9c5d-4eb4-b501-66d7339832b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:52:51 compute-0 nova_compute[185389]: 2026-01-26 16:52:51.388 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquired lock "refresh_cache-60ba224f-9c5d-4eb4-b501-66d7339832b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:52:51 compute-0 nova_compute[185389]: 2026-01-26 16:52:51.389 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 26 16:52:51 compute-0 nova_compute[185389]: 2026-01-26 16:52:51.390 185393 DEBUG nova.objects.instance [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 60ba224f-9c5d-4eb4-b501-66d7339832b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:52:52 compute-0 nova_compute[185389]: 2026-01-26 16:52:52.964 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Updating instance_info_cache with network_info: [{"id": "0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f", "address": "fa:16:3e:b0:51:31", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.57", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f88f3ae-fb", "ovs_interfaceid": "0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:52:52 compute-0 nova_compute[185389]: 2026-01-26 16:52:52.979 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Releasing lock "refresh_cache-60ba224f-9c5d-4eb4-b501-66d7339832b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:52:52 compute-0 nova_compute[185389]: 2026-01-26 16:52:52.980 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 26 16:52:52 compute-0 nova_compute[185389]: 2026-01-26 16:52:52.981 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:52:52 compute-0 nova_compute[185389]: 2026-01-26 16:52:52.981 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:52:53 compute-0 nova_compute[185389]: 2026-01-26 16:52:53.011 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:52:53 compute-0 nova_compute[185389]: 2026-01-26 16:52:53.012 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:52:53 compute-0 nova_compute[185389]: 2026-01-26 16:52:53.012 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:52:53 compute-0 nova_compute[185389]: 2026-01-26 16:52:53.012 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:52:53 compute-0 nova_compute[185389]: 2026-01-26 16:52:53.110 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:52:53 compute-0 nova_compute[185389]: 2026-01-26 16:52:53.184 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:52:53 compute-0 nova_compute[185389]: 2026-01-26 16:52:53.186 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:52:53 compute-0 nova_compute[185389]: 2026-01-26 16:52:53.253 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:52:53 compute-0 nova_compute[185389]: 2026-01-26 16:52:53.256 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:52:53 compute-0 nova_compute[185389]: 2026-01-26 16:52:53.327 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:52:53 compute-0 nova_compute[185389]: 2026-01-26 16:52:53.329 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:52:53 compute-0 nova_compute[185389]: 2026-01-26 16:52:53.395 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:52:53 compute-0 nova_compute[185389]: 2026-01-26 16:52:53.404 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:52:53 compute-0 nova_compute[185389]: 2026-01-26 16:52:53.424 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:52:53 compute-0 nova_compute[185389]: 2026-01-26 16:52:53.480 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:52:53 compute-0 nova_compute[185389]: 2026-01-26 16:52:53.482 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:52:53 compute-0 nova_compute[185389]: 2026-01-26 16:52:53.578 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:52:53 compute-0 nova_compute[185389]: 2026-01-26 16:52:53.580 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:52:53 compute-0 nova_compute[185389]: 2026-01-26 16:52:53.657 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:52:53 compute-0 nova_compute[185389]: 2026-01-26 16:52:53.658 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:52:53 compute-0 nova_compute[185389]: 2026-01-26 16:52:53.723 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:52:54 compute-0 nova_compute[185389]: 2026-01-26 16:52:54.100 185393 WARNING nova.virt.libvirt.driver [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:52:54 compute-0 nova_compute[185389]: 2026-01-26 16:52:54.101 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4865MB free_disk=72.40018844604492GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:52:54 compute-0 nova_compute[185389]: 2026-01-26 16:52:54.101 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:52:54 compute-0 nova_compute[185389]: 2026-01-26 16:52:54.102 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:52:54 compute-0 nova_compute[185389]: 2026-01-26 16:52:54.211 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance 60ba224f-9c5d-4eb4-b501-66d7339832b9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:52:54 compute-0 nova_compute[185389]: 2026-01-26 16:52:54.212 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance a2578f61-3f19-40f4-a32f-97cf22569550 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:52:54 compute-0 nova_compute[185389]: 2026-01-26 16:52:54.212 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:52:54 compute-0 nova_compute[185389]: 2026-01-26 16:52:54.212 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:52:54 compute-0 nova_compute[185389]: 2026-01-26 16:52:54.308 185393 DEBUG nova.compute.provider_tree [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:52:54 compute-0 nova_compute[185389]: 2026-01-26 16:52:54.343 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:52:54 compute-0 nova_compute[185389]: 2026-01-26 16:52:54.366 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:52:54 compute-0 nova_compute[185389]: 2026-01-26 16:52:54.367 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.265s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:52:54 compute-0 nova_compute[185389]: 2026-01-26 16:52:54.839 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:52:56 compute-0 nova_compute[185389]: 2026-01-26 16:52:56.106 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:52:56 compute-0 nova_compute[185389]: 2026-01-26 16:52:56.107 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:52:58 compute-0 nova_compute[185389]: 2026-01-26 16:52:58.395 185393 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769446363.393828, ad24fa25-1660-453a-ad2c-f873360adfae => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:52:58 compute-0 nova_compute[185389]: 2026-01-26 16:52:58.396 185393 INFO nova.compute.manager [-] [instance: ad24fa25-1660-453a-ad2c-f873360adfae] VM Stopped (Lifecycle Event)
Jan 26 16:52:58 compute-0 nova_compute[185389]: 2026-01-26 16:52:58.429 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:52:58 compute-0 nova_compute[185389]: 2026-01-26 16:52:58.431 185393 DEBUG nova.compute.manager [None req-951d3080-eda1-4488-aec5-076b5ca6a2e1 - - - - - -] [instance: ad24fa25-1660-453a-ad2c-f873360adfae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:52:58 compute-0 nova_compute[185389]: 2026-01-26 16:52:58.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:52:59 compute-0 podman[201244]: time="2026-01-26T16:52:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 16:52:59 compute-0 podman[201244]: @ - - [26/Jan/2026:16:52:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 16:52:59 compute-0 podman[201244]: @ - - [26/Jan/2026:16:52:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4376 "" "Go-http-client/1.1"
Jan 26 16:52:59 compute-0 nova_compute[185389]: 2026-01-26 16:52:59.842 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:53:01 compute-0 openstack_network_exporter[204387]: ERROR   16:53:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 16:53:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:53:01 compute-0 openstack_network_exporter[204387]: ERROR   16:53:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 16:53:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:53:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:53:01.734 106955 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:53:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:53:01.735 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:53:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:53:01.736 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:53:03 compute-0 nova_compute[185389]: 2026-01-26 16:53:03.433 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:53:04 compute-0 nova_compute[185389]: 2026-01-26 16:53:04.845 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:53:08 compute-0 nova_compute[185389]: 2026-01-26 16:53:08.436 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:53:09 compute-0 nova_compute[185389]: 2026-01-26 16:53:09.848 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:53:13 compute-0 nova_compute[185389]: 2026-01-26 16:53:13.441 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:53:14 compute-0 podman[244861]: 2026-01-26 16:53:14.736063004 +0000 UTC m=+15.613427044 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 26 16:53:14 compute-0 podman[244871]: 2026-01-26 16:53:14.73701587 +0000 UTC m=+6.609415204 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, config_id=openstack_network_exporter, io.openshift.expose-services=, maintainer=Red Hat, Inc., release=1755695350, version=9.6, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 26 16:53:14 compute-0 podman[244882]: 2026-01-26 16:53:14.757282682 +0000 UTC m=+3.644147827 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, tcib_managed=true, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260120, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Jan 26 16:53:14 compute-0 nova_compute[185389]: 2026-01-26 16:53:14.851 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:53:16 compute-0 podman[244925]: 2026-01-26 16:53:16.264294701 +0000 UTC m=+0.140048388 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 26 16:53:18 compute-0 podman[244950]: 2026-01-26 16:53:18.229095279 +0000 UTC m=+0.108450928 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible)
Jan 26 16:53:18 compute-0 nova_compute[185389]: 2026-01-26 16:53:18.446 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:53:19 compute-0 nova_compute[185389]: 2026-01-26 16:53:19.854 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:53:21 compute-0 podman[244971]: 2026-01-26 16:53:21.222734281 +0000 UTC m=+0.096193573 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ceilometer_agent_ipmi)
Jan 26 16:53:21 compute-0 podman[244972]: 2026-01-26 16:53:21.242571981 +0000 UTC m=+0.120065924 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, vcs-type=git, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, release-0.7.12=, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, io.openshift.expose-services=, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, container_name=kepler, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible)
Jan 26 16:53:21 compute-0 podman[244970]: 2026-01-26 16:53:21.269544847 +0000 UTC m=+0.155830679 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 26 16:53:23 compute-0 nova_compute[185389]: 2026-01-26 16:53:23.450 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:53:24 compute-0 nova_compute[185389]: 2026-01-26 16:53:24.857 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:53:28 compute-0 nova_compute[185389]: 2026-01-26 16:53:28.455 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:53:29 compute-0 podman[201244]: time="2026-01-26T16:53:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 16:53:29 compute-0 podman[201244]: @ - - [26/Jan/2026:16:53:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 16:53:29 compute-0 podman[201244]: @ - - [26/Jan/2026:16:53:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4382 "" "Go-http-client/1.1"
Jan 26 16:53:29 compute-0 nova_compute[185389]: 2026-01-26 16:53:29.860 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:53:31 compute-0 openstack_network_exporter[204387]: ERROR   16:53:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 16:53:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:53:31 compute-0 openstack_network_exporter[204387]: ERROR   16:53:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 16:53:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:53:33 compute-0 nova_compute[185389]: 2026-01-26 16:53:33.460 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:53:34 compute-0 nova_compute[185389]: 2026-01-26 16:53:34.863 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:53:38 compute-0 nova_compute[185389]: 2026-01-26 16:53:38.463 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:53:39 compute-0 nova_compute[185389]: 2026-01-26 16:53:39.865 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:53:43 compute-0 nova_compute[185389]: 2026-01-26 16:53:43.468 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:53:43 compute-0 nova_compute[185389]: 2026-01-26 16:53:43.732 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:53:43 compute-0 nova_compute[185389]: 2026-01-26 16:53:43.733 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:53:44 compute-0 nova_compute[185389]: 2026-01-26 16:53:44.879 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:53:44 compute-0 podman[245036]: 2026-01-26 16:53:44.9385717 +0000 UTC m=+0.090889548 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 26 16:53:44 compute-0 podman[245034]: 2026-01-26 16:53:44.951506553 +0000 UTC m=+0.113713551 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, name=ubi9-minimal, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, release=1755695350, config_id=openstack_network_exporter, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 26 16:53:44 compute-0 podman[245035]: 2026-01-26 16:53:44.982232911 +0000 UTC m=+0.138511657 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20260120, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 26 16:53:46 compute-0 nova_compute[185389]: 2026-01-26 16:53:46.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:53:47 compute-0 podman[245096]: 2026-01-26 16:53:47.205844661 +0000 UTC m=+0.091771512 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 26 16:53:48 compute-0 nova_compute[185389]: 2026-01-26 16:53:48.470 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:53:48 compute-0 nova_compute[185389]: 2026-01-26 16:53:48.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:53:49 compute-0 podman[245120]: 2026-01-26 16:53:49.197390908 +0000 UTC m=+0.072506258 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true)
Jan 26 16:53:49 compute-0 nova_compute[185389]: 2026-01-26 16:53:49.883 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:53:50 compute-0 nova_compute[185389]: 2026-01-26 16:53:50.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:53:50 compute-0 nova_compute[185389]: 2026-01-26 16:53:50.720 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:53:51 compute-0 nova_compute[185389]: 2026-01-26 16:53:51.171 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "refresh_cache-a2578f61-3f19-40f4-a32f-97cf22569550" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:53:51 compute-0 nova_compute[185389]: 2026-01-26 16:53:51.172 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquired lock "refresh_cache-a2578f61-3f19-40f4-a32f-97cf22569550" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:53:51 compute-0 nova_compute[185389]: 2026-01-26 16:53:51.172 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 26 16:53:52 compute-0 podman[245141]: 2026-01-26 16:53:52.1964687 +0000 UTC m=+0.078627714 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 16:53:52 compute-0 podman[245142]: 2026-01-26 16:53:52.211862389 +0000 UTC m=+0.089397778 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=ubi9-container, config_id=kepler, name=ubi9, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., release=1214.1726694543, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., version=9.4, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', 
'/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, managed_by=edpm_ansible, container_name=kepler, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9)
Jan 26 16:53:52 compute-0 podman[245140]: 2026-01-26 16:53:52.25113669 +0000 UTC m=+0.139525134 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Jan 26 16:53:53 compute-0 nova_compute[185389]: 2026-01-26 16:53:53.262 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Updating instance_info_cache with network_info: [{"id": "58a644b5-e3a2-4838-9216-8540447cf0a5", "address": "fa:16:3e:ac:8d:a5", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.107", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58a644b5-e3", "ovs_interfaceid": "58a644b5-e3a2-4838-9216-8540447cf0a5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:53:53 compute-0 nova_compute[185389]: 2026-01-26 16:53:53.288 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Releasing lock "refresh_cache-a2578f61-3f19-40f4-a32f-97cf22569550" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:53:53 compute-0 nova_compute[185389]: 2026-01-26 16:53:53.289 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 26 16:53:53 compute-0 nova_compute[185389]: 2026-01-26 16:53:53.290 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:53:53 compute-0 nova_compute[185389]: 2026-01-26 16:53:53.290 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:53:53 compute-0 nova_compute[185389]: 2026-01-26 16:53:53.290 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:53:53 compute-0 nova_compute[185389]: 2026-01-26 16:53:53.342 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:53:53 compute-0 nova_compute[185389]: 2026-01-26 16:53:53.342 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:53:53 compute-0 nova_compute[185389]: 2026-01-26 16:53:53.343 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:53:53 compute-0 nova_compute[185389]: 2026-01-26 16:53:53.343 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:53:53 compute-0 nova_compute[185389]: 2026-01-26 16:53:53.473 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:53:54 compute-0 nova_compute[185389]: 2026-01-26 16:53:54.142 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:53:54 compute-0 nova_compute[185389]: 2026-01-26 16:53:54.225 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:53:54 compute-0 nova_compute[185389]: 2026-01-26 16:53:54.228 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:53:54 compute-0 nova_compute[185389]: 2026-01-26 16:53:54.294 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:53:54 compute-0 nova_compute[185389]: 2026-01-26 16:53:54.296 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:53:54 compute-0 nova_compute[185389]: 2026-01-26 16:53:54.360 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:53:54 compute-0 nova_compute[185389]: 2026-01-26 16:53:54.362 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:53:54 compute-0 nova_compute[185389]: 2026-01-26 16:53:54.423 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:53:54 compute-0 nova_compute[185389]: 2026-01-26 16:53:54.432 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:53:54 compute-0 nova_compute[185389]: 2026-01-26 16:53:54.495 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:53:54 compute-0 nova_compute[185389]: 2026-01-26 16:53:54.497 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:53:54 compute-0 nova_compute[185389]: 2026-01-26 16:53:54.579 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:53:54 compute-0 nova_compute[185389]: 2026-01-26 16:53:54.581 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:53:54 compute-0 nova_compute[185389]: 2026-01-26 16:53:54.647 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:53:54 compute-0 nova_compute[185389]: 2026-01-26 16:53:54.649 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:53:54 compute-0 nova_compute[185389]: 2026-01-26 16:53:54.723 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:53:54 compute-0 nova_compute[185389]: 2026-01-26 16:53:54.885 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:53:55 compute-0 nova_compute[185389]: 2026-01-26 16:53:55.144 185393 WARNING nova.virt.libvirt.driver [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:53:55 compute-0 nova_compute[185389]: 2026-01-26 16:53:55.147 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4842MB free_disk=72.40018844604492GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:53:55 compute-0 nova_compute[185389]: 2026-01-26 16:53:55.147 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:53:55 compute-0 nova_compute[185389]: 2026-01-26 16:53:55.148 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:53:55 compute-0 nova_compute[185389]: 2026-01-26 16:53:55.288 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance 60ba224f-9c5d-4eb4-b501-66d7339832b9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:53:55 compute-0 nova_compute[185389]: 2026-01-26 16:53:55.289 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance a2578f61-3f19-40f4-a32f-97cf22569550 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:53:55 compute-0 nova_compute[185389]: 2026-01-26 16:53:55.289 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:53:55 compute-0 nova_compute[185389]: 2026-01-26 16:53:55.290 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:53:55 compute-0 nova_compute[185389]: 2026-01-26 16:53:55.391 185393 DEBUG nova.compute.provider_tree [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:53:55 compute-0 nova_compute[185389]: 2026-01-26 16:53:55.410 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:53:55 compute-0 nova_compute[185389]: 2026-01-26 16:53:55.413 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:53:55 compute-0 nova_compute[185389]: 2026-01-26 16:53:55.414 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.266s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:53:56 compute-0 nova_compute[185389]: 2026-01-26 16:53:56.843 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:53:56 compute-0 nova_compute[185389]: 2026-01-26 16:53:56.844 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:53:56 compute-0 nova_compute[185389]: 2026-01-26 16:53:56.874 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:53:58 compute-0 nova_compute[185389]: 2026-01-26 16:53:58.474 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:53:59 compute-0 podman[201244]: time="2026-01-26T16:53:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 16:53:59 compute-0 podman[201244]: @ - - [26/Jan/2026:16:53:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 16:53:59 compute-0 podman[201244]: @ - - [26/Jan/2026:16:53:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4376 "" "Go-http-client/1.1"
Jan 26 16:53:59 compute-0 nova_compute[185389]: 2026-01-26 16:53:59.888 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:54:01 compute-0 openstack_network_exporter[204387]: ERROR   16:54:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 16:54:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:54:01 compute-0 openstack_network_exporter[204387]: ERROR   16:54:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 16:54:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:54:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:54:01.736 106955 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:54:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:54:01.737 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:54:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:54:01.738 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:54:03 compute-0 nova_compute[185389]: 2026-01-26 16:54:03.478 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:54:04 compute-0 nova_compute[185389]: 2026-01-26 16:54:04.891 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:54:08 compute-0 nova_compute[185389]: 2026-01-26 16:54:08.482 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:54:09 compute-0 nova_compute[185389]: 2026-01-26 16:54:09.894 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:54:13 compute-0 nova_compute[185389]: 2026-01-26 16:54:13.488 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:54:14 compute-0 nova_compute[185389]: 2026-01-26 16:54:14.896 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:54:15 compute-0 podman[245224]: 2026-01-26 16:54:15.22023521 +0000 UTC m=+0.097516779 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, container_name=openstack_network_exporter, io.openshift.expose-services=, version=9.6, release=1755695350, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, distribution-scope=public, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., vcs-type=git, name=ubi9-minimal, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Jan 26 16:54:15 compute-0 podman[245226]: 2026-01-26 16:54:15.24738554 +0000 UTC m=+0.116533797 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 26 16:54:15 compute-0 podman[245225]: 2026-01-26 16:54:15.255539042 +0000 UTC m=+0.120016583 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, 
org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, tcib_managed=true, org.label-schema.build-date=20260120, org.label-schema.vendor=CentOS)
Jan 26 16:54:18 compute-0 podman[245285]: 2026-01-26 16:54:18.243324383 +0000 UTC m=+0.112203509 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 26 16:54:18 compute-0 nova_compute[185389]: 2026-01-26 16:54:18.734 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:54:18 compute-0 sshd-session[244891]: Connection closed by 101.36.123.102 port 39744 [preauth]
Jan 26 16:54:19 compute-0 nova_compute[185389]: 2026-01-26 16:54:19.899 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:54:20 compute-0 podman[245309]: 2026-01-26 16:54:20.211819082 +0000 UTC m=+0.093292354 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 16:54:23 compute-0 podman[245328]: 2026-01-26 16:54:23.195084582 +0000 UTC m=+0.071430178 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 26 16:54:23 compute-0 podman[245327]: 2026-01-26 16:54:23.245096486 +0000 UTC m=+0.127159567 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, 
org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 16:54:23 compute-0 podman[245329]: 2026-01-26 16:54:23.261864013 +0000 UTC m=+0.118798879 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, config_id=kepler, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, vcs-type=git, version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, distribution-scope=public, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, release=1214.1726694543, architecture=x86_64)
Jan 26 16:54:23 compute-0 nova_compute[185389]: 2026-01-26 16:54:23.735 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:54:24 compute-0 nova_compute[185389]: 2026-01-26 16:54:24.901 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:54:28 compute-0 nova_compute[185389]: 2026-01-26 16:54:28.739 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:54:29 compute-0 podman[201244]: time="2026-01-26T16:54:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 16:54:29 compute-0 podman[201244]: @ - - [26/Jan/2026:16:54:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 16:54:29 compute-0 podman[201244]: @ - - [26/Jan/2026:16:54:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4382 "" "Go-http-client/1.1"
Jan 26 16:54:29 compute-0 nova_compute[185389]: 2026-01-26 16:54:29.903 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.344 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.344 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.345 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d0f4a810>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.346 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f04ce8af020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d0f4a810>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d0f4a810>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d0f4a810>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.348 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d0f4a810>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.348 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d0f4a810>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.348 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d0f4a810>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.348 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d0f4a810>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.348 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d0f4a810>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.349 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d0f4a810>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.349 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d0f4a810>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.349 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d0f4a810>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.349 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d0f4a810>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.350 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8adb20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d0f4a810>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.350 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d0f4a810>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.350 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d0f4a810>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.350 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d0f4a810>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.350 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d0f4a810>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.351 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d0f4a810>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.351 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d0f4a810>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.351 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d0f4a810>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.351 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d0f4a810>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.351 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d0f4a810>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.352 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d0f4a810>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.352 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d0f4a810>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.352 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d0f4a810>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.354 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '60ba224f-9c5d-4eb4-b501-66d7339832b9', 'name': 'test_0', 'flavor': {'id': 'c2a8df4d-a1d7-42a3-8279-8c7de8a1a662', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '718285d9-0264-40f4-9fb3-d2faff180284'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'aa8f1f3bbce34237a208c8e92ca9286f', 'user_id': '3c0ab9326d69400aa6a4a91432885d7f', 'hostId': '5b2ad2004cb5a5985538ff82d4dda707a9aa9c0c35c745039e18e89b', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.359 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'a2578f61-3f19-40f4-a32f-97cf22569550', 'name': 'vn-vo2qfhx-3o3bmw4mfsy3-eo23jljqgyv6-vnf-pi7veetjym6y', 'flavor': {'id': 'c2a8df4d-a1d7-42a3-8279-8c7de8a1a662', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '718285d9-0264-40f4-9fb3-d2faff180284'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'aa8f1f3bbce34237a208c8e92ca9286f', 'user_id': '3c0ab9326d69400aa6a4a91432885d7f', 'hostId': '5b2ad2004cb5a5985538ff82d4dda707a9aa9c0c35c745039e18e89b', 'status': 'active', 'metadata': {'metering.server_group': '06b33269-d1c6-4fb9-a44b-be304982a550'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.360 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.360 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.360 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.360 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.362 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-26T16:54:31.360484) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:54:31 compute-0 openstack_network_exporter[204387]: ERROR   16:54:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 16:54:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:54:31 compute-0 openstack_network_exporter[204387]: ERROR   16:54:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 16:54:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.449 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.450 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.450 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.517 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.518 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.518 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.519 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.519 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f04ce8af080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.519 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.519 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.519 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.519 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.519 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.latency volume: 876270399 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.520 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.latency volume: 10769042 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.520 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.520 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.latency volume: 1221465504 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.520 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.latency volume: 9811607 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.521 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.521 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.521 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f04ce8af0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.521 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.522 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.522 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.522 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-26T16:54:31.519615) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.522 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.522 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.522 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.523 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-26T16:54:31.522269) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.523 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.523 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.523 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.523 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.524 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.524 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f04ce8af470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.524 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.524 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.524 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.524 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.525 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-26T16:54:31.524802) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.529 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.533 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.533 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.533 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f04ce8ac260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.534 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.534 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f04cfa81c70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.534 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.534 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.534 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.534 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.534 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-26T16:54:31.534466) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.555 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/cpu volume: 45730000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.576 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/cpu volume: 40070000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.577 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.577 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f04ce8ac170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.577 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.577 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.577 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.577 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.577 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.578 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.578 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.578 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f04ce8af140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.578 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.579 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.578 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-26T16:54:31.577679) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.579 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.579 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.579 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.579 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f04ce8adb50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.579 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.580 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.580 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.580 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-26T16:54:31.579237) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.580 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.580 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.580 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.580 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.581 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f04ce8ad9a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.581 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-26T16:54:31.580295) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.581 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.581 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.581 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.581 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.581 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.582 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-26T16:54:31.581659) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.582 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.outgoing.bytes volume: 2468 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.582 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.582 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f04ce8af1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.582 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.582 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.582 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.583 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.583 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-26T16:54:31.582936) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.583 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.583 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f04ce8ad8e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.583 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.583 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.583 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.583 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.584 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.584 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.outgoing.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.584 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-26T16:54:31.583915) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.584 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.584 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f04ce8ada60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.584 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.585 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.585 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.585 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.585 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.585 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-26T16:54:31.585451) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.585 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.586 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.586 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f04ce8adaf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.586 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.586 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f04ce8ad880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.586 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.586 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.586 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.586 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.587 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.587 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-26T16:54:31.586852) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.587 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.587 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.587 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f04ce8af3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.588 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.588 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.588 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.588 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.588 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/memory.usage volume: 48.79296875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.588 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-26T16:54:31.588406) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.588 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/memory.usage volume: 48.953125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.589 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.589 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f04ce8af6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.589 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.589 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.589 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.589 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.589 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.590 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-26T16:54:31.589661) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.590 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.590 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.590 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f04ce8af410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.590 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.590 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.590 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.590 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.591 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.bytes volume: 2304 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.591 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-26T16:54:31.590889) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.591 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.incoming.bytes volume: 1696 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.591 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.591 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f04ce8ac470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.591 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.592 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.592 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.592 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.592 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.592 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.592 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.593 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f04d17142f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.593 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.593 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.593 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-26T16:54:31.592192) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.593 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.593 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.594 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-26T16:54:31.593738) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.618 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.618 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.618 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.643 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.643 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.644 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.644 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.644 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f04ce8afe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.645 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.645 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.645 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.645 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.645 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.646 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.646 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-26T16:54:31.645574) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.646 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.647 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.647 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.647 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.648 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.648 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f04ce8aede0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.648 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.649 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.649 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.649 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.649 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.649 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.650 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.650 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.650 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.651 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.652 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.652 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f04ce8aef00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.652 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.652 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.653 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.653 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.653 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.latency volume: 331117122 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.653 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.latency volume: 97972603 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.654 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.latency volume: 58692222 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.654 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.latency volume: 437272566 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.655 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.latency volume: 86953754 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.655 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-26T16:54:31.649291) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.655 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-26T16:54:31.653190) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.655 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.latency volume: 62824695 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.656 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.656 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f04ce8ac710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.656 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.656 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.656 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.657 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.657 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.657 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-26T16:54:31.656990) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.657 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.658 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.658 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f04ce8aef60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.658 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.659 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.659 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.659 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.660 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.660 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-26T16:54:31.659276) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.660 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.660 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.661 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.661 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.663 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.663 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.664 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f04ce8aefc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.664 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.664 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.664 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.664 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.665 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-26T16:54:31.664807) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.665 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.665 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.666 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.666 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.666 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.667 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.667 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.668 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.668 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.668 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.668 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.668 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.668 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.668 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.668 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.668 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.668 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.668 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.669 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.669 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.669 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.669 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.669 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.669 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.669 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.669 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.669 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.669 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.669 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.669 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.669 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.670 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:54:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:54:31.670 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:54:33 compute-0 nova_compute[185389]: 2026-01-26 16:54:33.744 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:54:34 compute-0 nova_compute[185389]: 2026-01-26 16:54:34.906 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:54:38 compute-0 nova_compute[185389]: 2026-01-26 16:54:38.749 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:54:39 compute-0 nova_compute[185389]: 2026-01-26 16:54:39.908 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:54:43 compute-0 nova_compute[185389]: 2026-01-26 16:54:43.751 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:54:44 compute-0 nova_compute[185389]: 2026-01-26 16:54:44.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:54:44 compute-0 nova_compute[185389]: 2026-01-26 16:54:44.720 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:54:44 compute-0 nova_compute[185389]: 2026-01-26 16:54:44.911 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:54:46 compute-0 podman[245394]: 2026-01-26 16:54:46.189510498 +0000 UTC m=+0.068072227 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 16:54:46 compute-0 podman[245392]: 2026-01-26 16:54:46.202937623 +0000 UTC m=+0.087275330 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, name=ubi9-minimal, release=1755695350, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=openstack_network_exporter, distribution-scope=public, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 26 16:54:46 compute-0 podman[245393]: 2026-01-26 16:54:46.203227821 +0000 UTC m=+0.081526583 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, 
container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20260120, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=93ecf842527b95c82e14fba92451bd07)
Jan 26 16:54:48 compute-0 nova_compute[185389]: 2026-01-26 16:54:48.722 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:54:48 compute-0 nova_compute[185389]: 2026-01-26 16:54:48.756 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:54:49 compute-0 podman[245455]: 2026-01-26 16:54:49.201311788 +0000 UTC m=+0.074026430 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 26 16:54:49 compute-0 nova_compute[185389]: 2026-01-26 16:54:49.912 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:54:50 compute-0 nova_compute[185389]: 2026-01-26 16:54:50.721 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:54:51 compute-0 podman[245479]: 2026-01-26 16:54:51.203069714 +0000 UTC m=+0.085887681 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:54:51 compute-0 nova_compute[185389]: 2026-01-26 16:54:51.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:54:51 compute-0 nova_compute[185389]: 2026-01-26 16:54:51.721 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:54:51 compute-0 nova_compute[185389]: 2026-01-26 16:54:51.721 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 16:54:52 compute-0 nova_compute[185389]: 2026-01-26 16:54:52.910 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "refresh_cache-60ba224f-9c5d-4eb4-b501-66d7339832b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:54:52 compute-0 nova_compute[185389]: 2026-01-26 16:54:52.910 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquired lock "refresh_cache-60ba224f-9c5d-4eb4-b501-66d7339832b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:54:52 compute-0 nova_compute[185389]: 2026-01-26 16:54:52.910 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 26 16:54:52 compute-0 nova_compute[185389]: 2026-01-26 16:54:52.911 185393 DEBUG nova.objects.instance [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 60ba224f-9c5d-4eb4-b501-66d7339832b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:54:53 compute-0 nova_compute[185389]: 2026-01-26 16:54:53.761 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:54:54 compute-0 podman[245499]: 2026-01-26 16:54:54.198345442 +0000 UTC m=+0.079258882 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, build-date=2024-09-18T21:23:30, config_id=kepler, vcs-type=git, com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, io.openshift.tags=base rhel9, version=9.4, architecture=x86_64, managed_by=edpm_ansible, name=ubi9, release=1214.1726694543, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc.)
Jan 26 16:54:54 compute-0 podman[245498]: 2026-01-26 16:54:54.222025868 +0000 UTC m=+0.103227425 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 26 16:54:54 compute-0 podman[245497]: 2026-01-26 16:54:54.228157665 +0000 UTC m=+0.115759727 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, 
io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:54:54 compute-0 nova_compute[185389]: 2026-01-26 16:54:54.914 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:54:54 compute-0 nova_compute[185389]: 2026-01-26 16:54:54.918 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Updating instance_info_cache with network_info: [{"id": "0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f", "address": "fa:16:3e:b0:51:31", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.57", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f88f3ae-fb", "ovs_interfaceid": "0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:54:56 compute-0 nova_compute[185389]: 2026-01-26 16:54:56.114 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Releasing lock "refresh_cache-60ba224f-9c5d-4eb4-b501-66d7339832b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:54:56 compute-0 nova_compute[185389]: 2026-01-26 16:54:56.115 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 26 16:54:56 compute-0 nova_compute[185389]: 2026-01-26 16:54:56.115 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:54:56 compute-0 nova_compute[185389]: 2026-01-26 16:54:56.116 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:54:56 compute-0 nova_compute[185389]: 2026-01-26 16:54:56.116 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:54:56 compute-0 nova_compute[185389]: 2026-01-26 16:54:56.117 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:54:56 compute-0 nova_compute[185389]: 2026-01-26 16:54:56.767 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:54:56 compute-0 nova_compute[185389]: 2026-01-26 16:54:56.767 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:54:56 compute-0 nova_compute[185389]: 2026-01-26 16:54:56.768 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:54:56 compute-0 nova_compute[185389]: 2026-01-26 16:54:56.768 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:54:56 compute-0 nova_compute[185389]: 2026-01-26 16:54:56.858 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:54:56 compute-0 nova_compute[185389]: 2026-01-26 16:54:56.921 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:54:56 compute-0 nova_compute[185389]: 2026-01-26 16:54:56.922 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:54:56 compute-0 nova_compute[185389]: 2026-01-26 16:54:56.995 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:54:56 compute-0 nova_compute[185389]: 2026-01-26 16:54:56.997 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:54:57 compute-0 nova_compute[185389]: 2026-01-26 16:54:57.070 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:54:57 compute-0 nova_compute[185389]: 2026-01-26 16:54:57.072 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:54:57 compute-0 nova_compute[185389]: 2026-01-26 16:54:57.137 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:54:57 compute-0 nova_compute[185389]: 2026-01-26 16:54:57.148 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:54:57 compute-0 nova_compute[185389]: 2026-01-26 16:54:57.218 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:54:57 compute-0 nova_compute[185389]: 2026-01-26 16:54:57.220 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:54:57 compute-0 nova_compute[185389]: 2026-01-26 16:54:57.299 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:54:57 compute-0 nova_compute[185389]: 2026-01-26 16:54:57.301 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:54:57 compute-0 nova_compute[185389]: 2026-01-26 16:54:57.369 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:54:57 compute-0 nova_compute[185389]: 2026-01-26 16:54:57.371 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:54:57 compute-0 nova_compute[185389]: 2026-01-26 16:54:57.440 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:54:57 compute-0 nova_compute[185389]: 2026-01-26 16:54:57.811 185393 WARNING nova.virt.libvirt.driver [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:54:57 compute-0 nova_compute[185389]: 2026-01-26 16:54:57.813 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4843MB free_disk=72.40018844604492GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:54:57 compute-0 nova_compute[185389]: 2026-01-26 16:54:57.813 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:54:57 compute-0 nova_compute[185389]: 2026-01-26 16:54:57.813 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:54:58 compute-0 nova_compute[185389]: 2026-01-26 16:54:58.196 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance 60ba224f-9c5d-4eb4-b501-66d7339832b9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:54:58 compute-0 nova_compute[185389]: 2026-01-26 16:54:58.196 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance a2578f61-3f19-40f4-a32f-97cf22569550 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:54:58 compute-0 nova_compute[185389]: 2026-01-26 16:54:58.196 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:54:58 compute-0 nova_compute[185389]: 2026-01-26 16:54:58.197 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:54:58 compute-0 nova_compute[185389]: 2026-01-26 16:54:58.272 185393 DEBUG nova.compute.provider_tree [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:54:58 compute-0 nova_compute[185389]: 2026-01-26 16:54:58.290 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:54:58 compute-0 nova_compute[185389]: 2026-01-26 16:54:58.292 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:54:58 compute-0 nova_compute[185389]: 2026-01-26 16:54:58.292 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.479s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:54:58 compute-0 nova_compute[185389]: 2026-01-26 16:54:58.765 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:54:59 compute-0 podman[201244]: time="2026-01-26T16:54:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 16:54:59 compute-0 podman[201244]: @ - - [26/Jan/2026:16:54:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 16:54:59 compute-0 podman[201244]: @ - - [26/Jan/2026:16:54:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4379 "" "Go-http-client/1.1"
Jan 26 16:54:59 compute-0 nova_compute[185389]: 2026-01-26 16:54:59.917 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:55:01 compute-0 openstack_network_exporter[204387]: ERROR   16:55:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 16:55:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:55:01 compute-0 openstack_network_exporter[204387]: ERROR   16:55:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 16:55:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:55:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:55:01.737 106955 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:55:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:55:01.738 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:55:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:55:01.739 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:55:02 compute-0 nova_compute[185389]: 2026-01-26 16:55:02.286 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:55:03 compute-0 nova_compute[185389]: 2026-01-26 16:55:03.768 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:55:04 compute-0 nova_compute[185389]: 2026-01-26 16:55:04.919 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:55:08 compute-0 nova_compute[185389]: 2026-01-26 16:55:08.769 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:55:09 compute-0 nova_compute[185389]: 2026-01-26 16:55:09.920 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:55:13 compute-0 nova_compute[185389]: 2026-01-26 16:55:13.774 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:55:14 compute-0 nova_compute[185389]: 2026-01-26 16:55:14.924 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:55:17 compute-0 podman[245583]: 2026-01-26 16:55:17.222062215 +0000 UTC m=+0.087869936 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 26 16:55:17 compute-0 podman[245581]: 2026-01-26 16:55:17.227799431 +0000 UTC m=+0.094910748 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., version=9.6, container_name=openstack_network_exporter, name=ubi9-minimal, vcs-type=git, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, release=1755695350, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, managed_by=edpm_ansible, io.openshift.expose-services=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Jan 26 16:55:17 compute-0 podman[245582]: 2026-01-26 16:55:17.249443042 +0000 UTC m=+0.117875644 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20260120, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Jan 26 16:55:18 compute-0 nova_compute[185389]: 2026-01-26 16:55:18.780 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:55:19 compute-0 nova_compute[185389]: 2026-01-26 16:55:19.927 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:55:20 compute-0 podman[245644]: 2026-01-26 16:55:20.208651458 +0000 UTC m=+0.096204123 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 26 16:55:22 compute-0 podman[245665]: 2026-01-26 16:55:22.179852011 +0000 UTC m=+0.073014341 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 26 16:55:23 compute-0 nova_compute[185389]: 2026-01-26 16:55:23.784 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:55:24 compute-0 nova_compute[185389]: 2026-01-26 16:55:24.930 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:55:25 compute-0 podman[245685]: 2026-01-26 16:55:25.216628503 +0000 UTC m=+0.088876024 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi)
Jan 26 16:55:25 compute-0 podman[245686]: 2026-01-26 16:55:25.222140893 +0000 UTC m=+0.091939747 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, release-0.7.12=, vcs-type=git, maintainer=Red Hat, Inc., managed_by=edpm_ansible, release=1214.1726694543, build-date=2024-09-18T21:23:30, config_id=kepler, container_name=kepler, io.buildah.version=1.29.0, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=base rhel9)
Jan 26 16:55:25 compute-0 podman[245684]: 2026-01-26 16:55:25.24915047 +0000 UTC m=+0.126941472 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 
Base Image, org.label-schema.schema-version=1.0)
Jan 26 16:55:28 compute-0 nova_compute[185389]: 2026-01-26 16:55:28.789 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:55:29 compute-0 podman[201244]: time="2026-01-26T16:55:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 16:55:29 compute-0 podman[201244]: @ - - [26/Jan/2026:16:55:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 16:55:29 compute-0 podman[201244]: @ - - [26/Jan/2026:16:55:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4382 "" "Go-http-client/1.1"
Jan 26 16:55:29 compute-0 nova_compute[185389]: 2026-01-26 16:55:29.932 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:55:31 compute-0 openstack_network_exporter[204387]: ERROR   16:55:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 16:55:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:55:31 compute-0 openstack_network_exporter[204387]: ERROR   16:55:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 16:55:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:55:33 compute-0 nova_compute[185389]: 2026-01-26 16:55:33.794 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:55:34 compute-0 nova_compute[185389]: 2026-01-26 16:55:34.935 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:55:38 compute-0 nova_compute[185389]: 2026-01-26 16:55:38.800 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:55:39 compute-0 nova_compute[185389]: 2026-01-26 16:55:39.936 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:55:43 compute-0 nova_compute[185389]: 2026-01-26 16:55:43.804 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:55:44 compute-0 nova_compute[185389]: 2026-01-26 16:55:44.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:55:44 compute-0 nova_compute[185389]: 2026-01-26 16:55:44.720 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:55:44 compute-0 nova_compute[185389]: 2026-01-26 16:55:44.939 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:55:48 compute-0 podman[245752]: 2026-01-26 16:55:48.196229017 +0000 UTC m=+0.078706536 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 26 16:55:48 compute-0 podman[245750]: 2026-01-26 16:55:48.222270267 +0000 UTC m=+0.113704570 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, managed_by=edpm_ansible, release=1755695350, name=ubi9-minimal, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=openstack_network_exporter, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6)
Jan 26 16:55:48 compute-0 podman[245751]: 2026-01-26 16:55:48.226090022 +0000 UTC m=+0.112198831 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260120, config_id=ceilometer_agent_compute)
Jan 26 16:55:48 compute-0 nova_compute[185389]: 2026-01-26 16:55:48.721 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:55:48 compute-0 nova_compute[185389]: 2026-01-26 16:55:48.807 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:55:49 compute-0 nova_compute[185389]: 2026-01-26 16:55:49.942 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:55:50 compute-0 nova_compute[185389]: 2026-01-26 16:55:50.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:55:51 compute-0 podman[245814]: 2026-01-26 16:55:51.194392964 +0000 UTC m=+0.076573588 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 26 16:55:51 compute-0 nova_compute[185389]: 2026-01-26 16:55:51.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:55:53 compute-0 podman[245837]: 2026-01-26 16:55:53.180202156 +0000 UTC m=+0.068413215 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Jan 26 16:55:53 compute-0 nova_compute[185389]: 2026-01-26 16:55:53.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:55:53 compute-0 nova_compute[185389]: 2026-01-26 16:55:53.720 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:55:53 compute-0 nova_compute[185389]: 2026-01-26 16:55:53.812 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:55:54 compute-0 nova_compute[185389]: 2026-01-26 16:55:54.853 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "refresh_cache-a2578f61-3f19-40f4-a32f-97cf22569550" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:55:54 compute-0 nova_compute[185389]: 2026-01-26 16:55:54.854 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquired lock "refresh_cache-a2578f61-3f19-40f4-a32f-97cf22569550" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:55:54 compute-0 nova_compute[185389]: 2026-01-26 16:55:54.854 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 26 16:55:54 compute-0 nova_compute[185389]: 2026-01-26 16:55:54.944 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:55:56 compute-0 podman[245856]: 2026-01-26 16:55:56.211501568 +0000 UTC m=+0.097899700 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_ipmi, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 26 16:55:56 compute-0 podman[245857]: 2026-01-26 16:55:56.223715471 +0000 UTC m=+0.104512470 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, version=9.4, config_id=kepler, distribution-scope=public, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, com.redhat.component=ubi9-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, architecture=x86_64, io.openshift.expose-services=, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30)
Jan 26 16:55:56 compute-0 podman[245855]: 2026-01-26 16:55:56.255729184 +0000 UTC m=+0.144050558 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:55:58 compute-0 nova_compute[185389]: 2026-01-26 16:55:58.223 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Updating instance_info_cache with network_info: [{"id": "58a644b5-e3a2-4838-9216-8540447cf0a5", "address": "fa:16:3e:ac:8d:a5", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.107", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58a644b5-e3", "ovs_interfaceid": "58a644b5-e3a2-4838-9216-8540447cf0a5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:55:58 compute-0 nova_compute[185389]: 2026-01-26 16:55:58.246 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Releasing lock "refresh_cache-a2578f61-3f19-40f4-a32f-97cf22569550" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:55:58 compute-0 nova_compute[185389]: 2026-01-26 16:55:58.247 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 26 16:55:58 compute-0 nova_compute[185389]: 2026-01-26 16:55:58.248 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:55:58 compute-0 nova_compute[185389]: 2026-01-26 16:55:58.248 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:55:58 compute-0 nova_compute[185389]: 2026-01-26 16:55:58.249 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:55:58 compute-0 nova_compute[185389]: 2026-01-26 16:55:58.302 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:55:58 compute-0 nova_compute[185389]: 2026-01-26 16:55:58.304 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:55:58 compute-0 nova_compute[185389]: 2026-01-26 16:55:58.305 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:55:58 compute-0 nova_compute[185389]: 2026-01-26 16:55:58.306 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:55:58 compute-0 nova_compute[185389]: 2026-01-26 16:55:58.408 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:55:58 compute-0 nova_compute[185389]: 2026-01-26 16:55:58.477 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:55:58 compute-0 nova_compute[185389]: 2026-01-26 16:55:58.479 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:55:58 compute-0 nova_compute[185389]: 2026-01-26 16:55:58.555 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:55:58 compute-0 nova_compute[185389]: 2026-01-26 16:55:58.557 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:55:58 compute-0 nova_compute[185389]: 2026-01-26 16:55:58.639 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:55:58 compute-0 nova_compute[185389]: 2026-01-26 16:55:58.640 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:55:58 compute-0 nova_compute[185389]: 2026-01-26 16:55:58.707 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:55:58 compute-0 nova_compute[185389]: 2026-01-26 16:55:58.715 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:55:58 compute-0 nova_compute[185389]: 2026-01-26 16:55:58.776 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:55:58 compute-0 nova_compute[185389]: 2026-01-26 16:55:58.777 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:55:58 compute-0 nova_compute[185389]: 2026-01-26 16:55:58.815 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:55:58 compute-0 nova_compute[185389]: 2026-01-26 16:55:58.852 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:55:58 compute-0 nova_compute[185389]: 2026-01-26 16:55:58.853 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:55:58 compute-0 nova_compute[185389]: 2026-01-26 16:55:58.916 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:55:58 compute-0 nova_compute[185389]: 2026-01-26 16:55:58.917 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:55:58 compute-0 nova_compute[185389]: 2026-01-26 16:55:58.977 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:55:59 compute-0 nova_compute[185389]: 2026-01-26 16:55:59.344 185393 WARNING nova.virt.libvirt.driver [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:55:59 compute-0 nova_compute[185389]: 2026-01-26 16:55:59.346 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4839MB free_disk=72.40018844604492GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:55:59 compute-0 nova_compute[185389]: 2026-01-26 16:55:59.346 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:55:59 compute-0 nova_compute[185389]: 2026-01-26 16:55:59.347 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:55:59 compute-0 nova_compute[185389]: 2026-01-26 16:55:59.471 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance 60ba224f-9c5d-4eb4-b501-66d7339832b9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:55:59 compute-0 nova_compute[185389]: 2026-01-26 16:55:59.471 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance a2578f61-3f19-40f4-a32f-97cf22569550 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:55:59 compute-0 nova_compute[185389]: 2026-01-26 16:55:59.471 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:55:59 compute-0 nova_compute[185389]: 2026-01-26 16:55:59.471 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:55:59 compute-0 nova_compute[185389]: 2026-01-26 16:55:59.530 185393 DEBUG nova.compute.provider_tree [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:55:59 compute-0 nova_compute[185389]: 2026-01-26 16:55:59.546 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:55:59 compute-0 nova_compute[185389]: 2026-01-26 16:55:59.548 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:55:59 compute-0 nova_compute[185389]: 2026-01-26 16:55:59.549 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.202s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:55:59 compute-0 podman[201244]: time="2026-01-26T16:55:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 16:55:59 compute-0 podman[201244]: @ - - [26/Jan/2026:16:55:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 16:55:59 compute-0 podman[201244]: @ - - [26/Jan/2026:16:55:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4379 "" "Go-http-client/1.1"
Jan 26 16:55:59 compute-0 nova_compute[185389]: 2026-01-26 16:55:59.948 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:56:01 compute-0 openstack_network_exporter[204387]: ERROR   16:56:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 16:56:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:56:01 compute-0 openstack_network_exporter[204387]: ERROR   16:56:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 16:56:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:56:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:56:01.738 106955 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:56:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:56:01.740 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:56:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:56:01.741 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:56:02 compute-0 nova_compute[185389]: 2026-01-26 16:56:02.547 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:56:02 compute-0 nova_compute[185389]: 2026-01-26 16:56:02.549 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:56:03 compute-0 nova_compute[185389]: 2026-01-26 16:56:03.820 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:56:04 compute-0 nova_compute[185389]: 2026-01-26 16:56:04.951 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:56:08 compute-0 nova_compute[185389]: 2026-01-26 16:56:08.822 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:56:09 compute-0 nova_compute[185389]: 2026-01-26 16:56:09.956 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:56:13 compute-0 nova_compute[185389]: 2026-01-26 16:56:13.824 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:56:14 compute-0 nova_compute[185389]: 2026-01-26 16:56:14.957 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:56:18 compute-0 nova_compute[185389]: 2026-01-26 16:56:18.829 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:56:19 compute-0 podman[245945]: 2026-01-26 16:56:19.283388432 +0000 UTC m=+0.107192112 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 26 16:56:19 compute-0 podman[245944]: 2026-01-26 16:56:19.289294363 +0000 UTC m=+0.113094313 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260120, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 16:56:19 compute-0 podman[245943]: 2026-01-26 16:56:19.299368468 +0000 UTC m=+0.132679098 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-type=git, name=ubi9-minimal, config_id=openstack_network_exporter, managed_by=edpm_ansible, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, distribution-scope=public, release=1755695350)
Jan 26 16:56:19 compute-0 nova_compute[185389]: 2026-01-26 16:56:19.961 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:56:22 compute-0 podman[246005]: 2026-01-26 16:56:22.223028112 +0000 UTC m=+0.102808593 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 26 16:56:23 compute-0 nova_compute[185389]: 2026-01-26 16:56:23.832 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:56:24 compute-0 podman[246027]: 2026-01-26 16:56:24.211636898 +0000 UTC m=+0.100442450 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 16:56:24 compute-0 nova_compute[185389]: 2026-01-26 16:56:24.964 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:56:27 compute-0 podman[246052]: 2026-01-26 16:56:27.249002473 +0000 UTC m=+0.113203747 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=kepler, vcs-type=git, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, distribution-scope=public, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., architecture=x86_64)
Jan 26 16:56:27 compute-0 podman[246048]: 2026-01-26 16:56:27.24890694 +0000 UTC m=+0.112800395 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible)
Jan 26 16:56:27 compute-0 podman[246047]: 2026-01-26 16:56:27.264981279 +0000 UTC m=+0.149187778 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:56:28 compute-0 nova_compute[185389]: 2026-01-26 16:56:28.836 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:56:29 compute-0 podman[201244]: time="2026-01-26T16:56:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 16:56:29 compute-0 podman[201244]: @ - - [26/Jan/2026:16:56:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 16:56:29 compute-0 podman[201244]: @ - - [26/Jan/2026:16:56:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4379 "" "Go-http-client/1.1"
Jan 26 16:56:29 compute-0 nova_compute[185389]: 2026-01-26 16:56:29.969 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.344 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.345 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.345 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.346 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f04ce8af020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.348 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.348 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.348 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.348 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.348 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.348 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.348 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.348 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.349 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.349 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.349 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.349 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8adb20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.349 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.349 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.349 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.349 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.350 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.350 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.350 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.350 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.350 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.350 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.350 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.350 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.362 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '60ba224f-9c5d-4eb4-b501-66d7339832b9', 'name': 'test_0', 'flavor': {'id': 'c2a8df4d-a1d7-42a3-8279-8c7de8a1a662', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '718285d9-0264-40f4-9fb3-d2faff180284'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'aa8f1f3bbce34237a208c8e92ca9286f', 'user_id': '3c0ab9326d69400aa6a4a91432885d7f', 'hostId': '5b2ad2004cb5a5985538ff82d4dda707a9aa9c0c35c745039e18e89b', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.367 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'a2578f61-3f19-40f4-a32f-97cf22569550', 'name': 'vn-vo2qfhx-3o3bmw4mfsy3-eo23jljqgyv6-vnf-pi7veetjym6y', 'flavor': {'id': 'c2a8df4d-a1d7-42a3-8279-8c7de8a1a662', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '718285d9-0264-40f4-9fb3-d2faff180284'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'aa8f1f3bbce34237a208c8e92ca9286f', 'user_id': '3c0ab9326d69400aa6a4a91432885d7f', 'hostId': '5b2ad2004cb5a5985538ff82d4dda707a9aa9c0c35c745039e18e89b', 'status': 'active', 'metadata': {'metering.server_group': '06b33269-d1c6-4fb9-a44b-be304982a550'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.368 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.368 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.368 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.369 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-26T16:56:31.368790) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.369 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:56:31 compute-0 openstack_network_exporter[204387]: ERROR   16:56:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 16:56:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:56:31 compute-0 openstack_network_exporter[204387]: ERROR   16:56:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 16:56:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.464 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.465 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.465 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.541 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.541 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.542 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.542 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.542 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f04ce8af080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.543 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.543 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.543 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.543 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.543 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.latency volume: 876270399 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.543 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.latency volume: 10769042 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.544 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.544 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.latency volume: 1221465504 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.544 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.latency volume: 9811607 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.544 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.545 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.545 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f04ce8af0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.545 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.545 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.546 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.546 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.545 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-26T16:56:31.543302) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.546 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.546 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.546 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.547 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.547 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.547 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.548 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.548 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f04ce8af470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.548 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.548 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-26T16:56:31.546131) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.548 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.548 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.548 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.549 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-26T16:56:31.548723) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.553 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.556 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.557 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.557 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f04ce8ac260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.557 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.557 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f04cfa81c70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.557 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.557 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.557 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.557 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.560 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-26T16:56:31.557873) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.579 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/cpu volume: 47060000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.598 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/cpu volume: 41400000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.599 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.599 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f04ce8ac170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.599 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.599 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.599 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.599 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.599 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.600 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.600 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.600 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f04ce8af140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.600 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.601 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.601 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.601 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-26T16:56:31.599648) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.601 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.601 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-26T16:56:31.601286) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.601 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.602 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f04ce8adb50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.602 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.602 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.602 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.602 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.602 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.602 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-26T16:56:31.602378) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.602 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.603 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.603 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f04ce8ad9a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.603 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.603 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.603 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.603 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.604 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-26T16:56:31.603765) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.604 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.bytes volume: 2412 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.604 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.outgoing.bytes volume: 2468 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.604 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.604 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f04ce8af1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.605 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.605 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.605 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.605 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.605 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.606 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f04ce8ad8e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.606 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.606 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.606 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.606 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.606 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-26T16:56:31.605504) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.607 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.607 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.outgoing.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.607 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.607 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f04ce8ada60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.607 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.608 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.608 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-26T16:56:31.606881) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.608 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.608 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.608 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.608 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-26T16:56:31.608324) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.608 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.609 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.609 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f04ce8adaf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.609 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.609 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f04ce8ad880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.609 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.609 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.609 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.610 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.610 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.610 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-26T16:56:31.610039) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.610 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.611 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.611 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f04ce8af3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.611 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.611 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.611 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.611 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.611 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/memory.usage volume: 48.79296875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.612 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-26T16:56:31.611515) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.612 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/memory.usage volume: 48.953125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.612 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.612 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f04ce8af6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.612 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.612 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.612 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.613 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.613 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.613 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-26T16:56:31.612982) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.613 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.614 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.614 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f04ce8af410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.614 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.614 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.614 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.614 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.614 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.bytes volume: 2304 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.615 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-26T16:56:31.614597) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.615 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.incoming.bytes volume: 1696 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.615 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.615 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f04ce8ac470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.615 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.615 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.615 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.615 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.616 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.616 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-26T16:56:31.615796) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.616 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.616 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.616 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f04d17142f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.617 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.617 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.617 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.617 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.618 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-26T16:56:31.617364) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.650 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.650 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.650 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.672 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.672 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.672 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.673 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.673 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f04ce8afe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.673 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.673 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.673 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.673 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.674 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.674 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.674 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.674 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.674 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.675 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.675 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-26T16:56:31.673844) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.675 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.675 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f04ce8aede0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.676 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.676 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.676 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.676 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.676 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.676 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.676 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.677 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.677 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.677 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.677 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-26T16:56:31.676305) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.678 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.678 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f04ce8aef00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.678 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.678 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.678 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.678 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.678 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.latency volume: 331117122 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.678 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-26T16:56:31.678651) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.679 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.latency volume: 97972603 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.679 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.latency volume: 58692222 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.679 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.latency volume: 437272566 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.679 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.latency volume: 86953754 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.679 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.latency volume: 62824695 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.680 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.680 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f04ce8ac710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.680 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.680 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.680 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.680 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.680 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.681 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.681 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-26T16:56:31.680698) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.681 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.681 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f04ce8aef60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.681 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.681 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.681 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.682 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.682 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.682 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.682 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.682 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.683 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.683 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.683 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.683 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f04ce8aefc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.683 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.684 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-26T16:56:31.681928) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.684 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.684 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.684 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.684 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.684 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.684 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.685 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.685 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.685 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-26T16:56:31.684281) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.685 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.686 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.686 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.686 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.686 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.686 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.687 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.687 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.687 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.687 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.687 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.687 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.687 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.687 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.687 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.688 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.688 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.688 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.688 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.688 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.688 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.688 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.688 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.689 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.689 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.689 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.689 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:56:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:56:31.689 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:56:33 compute-0 nova_compute[185389]: 2026-01-26 16:56:33.839 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:56:34 compute-0 nova_compute[185389]: 2026-01-26 16:56:34.971 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:56:38 compute-0 nova_compute[185389]: 2026-01-26 16:56:38.843 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:56:39 compute-0 nova_compute[185389]: 2026-01-26 16:56:39.974 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:56:43 compute-0 nova_compute[185389]: 2026-01-26 16:56:43.846 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:56:44 compute-0 nova_compute[185389]: 2026-01-26 16:56:44.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:56:44 compute-0 nova_compute[185389]: 2026-01-26 16:56:44.720 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:56:44 compute-0 nova_compute[185389]: 2026-01-26 16:56:44.977 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:56:48 compute-0 nova_compute[185389]: 2026-01-26 16:56:48.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:56:48 compute-0 nova_compute[185389]: 2026-01-26 16:56:48.852 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:56:49 compute-0 nova_compute[185389]: 2026-01-26 16:56:49.980 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:56:50 compute-0 podman[246108]: 2026-01-26 16:56:50.219708764 +0000 UTC m=+0.099313028 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=openstack_network_exporter, release=1755695350)
Jan 26 16:56:50 compute-0 podman[246110]: 2026-01-26 16:56:50.225171233 +0000 UTC m=+0.099076342 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 26 16:56:50 compute-0 podman[246109]: 2026-01-26 16:56:50.227415264 +0000 UTC m=+0.105667102 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20260120, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 16:56:52 compute-0 nova_compute[185389]: 2026-01-26 16:56:52.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:56:52 compute-0 nova_compute[185389]: 2026-01-26 16:56:52.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:56:53 compute-0 podman[246170]: 2026-01-26 16:56:53.176186472 +0000 UTC m=+0.059177484 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 26 16:56:53 compute-0 nova_compute[185389]: 2026-01-26 16:56:53.856 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:56:54 compute-0 nova_compute[185389]: 2026-01-26 16:56:54.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:56:54 compute-0 nova_compute[185389]: 2026-01-26 16:56:54.720 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:56:54 compute-0 nova_compute[185389]: 2026-01-26 16:56:54.721 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 16:56:54 compute-0 nova_compute[185389]: 2026-01-26 16:56:54.960 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "refresh_cache-60ba224f-9c5d-4eb4-b501-66d7339832b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:56:54 compute-0 nova_compute[185389]: 2026-01-26 16:56:54.960 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquired lock "refresh_cache-60ba224f-9c5d-4eb4-b501-66d7339832b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:56:54 compute-0 nova_compute[185389]: 2026-01-26 16:56:54.961 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 26 16:56:54 compute-0 nova_compute[185389]: 2026-01-26 16:56:54.962 185393 DEBUG nova.objects.instance [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 60ba224f-9c5d-4eb4-b501-66d7339832b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:56:54 compute-0 nova_compute[185389]: 2026-01-26 16:56:54.985 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:56:55 compute-0 podman[246191]: 2026-01-26 16:56:55.174575126 +0000 UTC m=+0.063684028 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:56:57 compute-0 nova_compute[185389]: 2026-01-26 16:56:57.893 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Updating instance_info_cache with network_info: [{"id": "0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f", "address": "fa:16:3e:b0:51:31", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.57", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f88f3ae-fb", "ovs_interfaceid": "0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:56:57 compute-0 nova_compute[185389]: 2026-01-26 16:56:57.911 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Releasing lock "refresh_cache-60ba224f-9c5d-4eb4-b501-66d7339832b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:56:57 compute-0 nova_compute[185389]: 2026-01-26 16:56:57.911 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 26 16:56:57 compute-0 nova_compute[185389]: 2026-01-26 16:56:57.912 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:56:57 compute-0 nova_compute[185389]: 2026-01-26 16:56:57.912 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:56:57 compute-0 nova_compute[185389]: 2026-01-26 16:56:57.913 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:56:57 compute-0 nova_compute[185389]: 2026-01-26 16:56:57.959 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:56:57 compute-0 nova_compute[185389]: 2026-01-26 16:56:57.959 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:56:57 compute-0 nova_compute[185389]: 2026-01-26 16:56:57.959 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:56:57 compute-0 nova_compute[185389]: 2026-01-26 16:56:57.960 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:56:58 compute-0 nova_compute[185389]: 2026-01-26 16:56:58.070 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:56:58 compute-0 nova_compute[185389]: 2026-01-26 16:56:58.157 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:56:58 compute-0 nova_compute[185389]: 2026-01-26 16:56:58.158 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:56:58 compute-0 nova_compute[185389]: 2026-01-26 16:56:58.226 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:56:58 compute-0 nova_compute[185389]: 2026-01-26 16:56:58.228 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:56:58 compute-0 podman[246213]: 2026-01-26 16:56:58.248898378 +0000 UTC m=+0.130310352 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 26 16:56:58 compute-0 podman[246214]: 2026-01-26 16:56:58.268436101 +0000 UTC m=+0.145507038 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_id=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., vcs-type=git, managed_by=edpm_ansible, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, container_name=kepler, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, architecture=x86_64, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Jan 26 16:56:58 compute-0 podman[246212]: 2026-01-26 16:56:58.269796659 +0000 UTC m=+0.148189771 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, 
container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:56:58 compute-0 nova_compute[185389]: 2026-01-26 16:56:58.298 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:56:58 compute-0 nova_compute[185389]: 2026-01-26 16:56:58.299 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:56:58 compute-0 nova_compute[185389]: 2026-01-26 16:56:58.361 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:56:58 compute-0 nova_compute[185389]: 2026-01-26 16:56:58.367 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:56:58 compute-0 nova_compute[185389]: 2026-01-26 16:56:58.431 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:56:58 compute-0 nova_compute[185389]: 2026-01-26 16:56:58.432 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:56:58 compute-0 nova_compute[185389]: 2026-01-26 16:56:58.493 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:56:58 compute-0 nova_compute[185389]: 2026-01-26 16:56:58.494 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:56:58 compute-0 nova_compute[185389]: 2026-01-26 16:56:58.554 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:56:58 compute-0 nova_compute[185389]: 2026-01-26 16:56:58.556 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:56:58 compute-0 nova_compute[185389]: 2026-01-26 16:56:58.622 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:56:58 compute-0 nova_compute[185389]: 2026-01-26 16:56:58.859 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:56:58 compute-0 nova_compute[185389]: 2026-01-26 16:56:58.992 185393 WARNING nova.virt.libvirt.driver [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:56:58 compute-0 nova_compute[185389]: 2026-01-26 16:56:58.993 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4844MB free_disk=72.40018844604492GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:56:58 compute-0 nova_compute[185389]: 2026-01-26 16:56:58.994 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:56:58 compute-0 nova_compute[185389]: 2026-01-26 16:56:58.994 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:56:59 compute-0 nova_compute[185389]: 2026-01-26 16:56:59.081 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance 60ba224f-9c5d-4eb4-b501-66d7339832b9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:56:59 compute-0 nova_compute[185389]: 2026-01-26 16:56:59.082 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance a2578f61-3f19-40f4-a32f-97cf22569550 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:56:59 compute-0 rsyslogd[235842]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 26 16:56:59 compute-0 nova_compute[185389]: 2026-01-26 16:56:59.082 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:56:59 compute-0 rsyslogd[235842]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 26 16:56:59 compute-0 nova_compute[185389]: 2026-01-26 16:56:59.082 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:56:59 compute-0 nova_compute[185389]: 2026-01-26 16:56:59.146 185393 DEBUG nova.compute.provider_tree [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:56:59 compute-0 nova_compute[185389]: 2026-01-26 16:56:59.161 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:56:59 compute-0 nova_compute[185389]: 2026-01-26 16:56:59.163 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:56:59 compute-0 nova_compute[185389]: 2026-01-26 16:56:59.163 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.169s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:56:59 compute-0 podman[201244]: time="2026-01-26T16:56:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 16:56:59 compute-0 podman[201244]: @ - - [26/Jan/2026:16:56:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 16:56:59 compute-0 podman[201244]: @ - - [26/Jan/2026:16:56:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4382 "" "Go-http-client/1.1"
Jan 26 16:56:59 compute-0 nova_compute[185389]: 2026-01-26 16:56:59.986 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:57:01 compute-0 openstack_network_exporter[204387]: ERROR   16:57:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 16:57:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:57:01 compute-0 openstack_network_exporter[204387]: ERROR   16:57:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 16:57:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:57:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:57:01.739 106955 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:57:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:57:01.740 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:57:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:57:01.741 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:57:03 compute-0 nova_compute[185389]: 2026-01-26 16:57:03.159 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:57:03 compute-0 nova_compute[185389]: 2026-01-26 16:57:03.863 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:57:04 compute-0 nova_compute[185389]: 2026-01-26 16:57:04.991 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:57:08 compute-0 nova_compute[185389]: 2026-01-26 16:57:08.868 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:57:09 compute-0 nova_compute[185389]: 2026-01-26 16:57:09.998 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:57:13 compute-0 nova_compute[185389]: 2026-01-26 16:57:13.871 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:57:15 compute-0 nova_compute[185389]: 2026-01-26 16:57:15.003 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:57:18 compute-0 nova_compute[185389]: 2026-01-26 16:57:18.873 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:57:20 compute-0 nova_compute[185389]: 2026-01-26 16:57:20.006 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:57:21 compute-0 podman[246300]: 2026-01-26 16:57:21.228230781 +0000 UTC m=+0.084635758 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 26 16:57:21 compute-0 podman[246299]: 2026-01-26 16:57:21.255909126 +0000 UTC m=+0.100268815 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260120, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image)
Jan 26 16:57:21 compute-0 podman[246298]: 2026-01-26 16:57:21.265376493 +0000 UTC m=+0.115372706 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, distribution-scope=public, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., config_id=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, name=ubi9-minimal, release=1755695350, container_name=openstack_network_exporter)
Jan 26 16:57:23 compute-0 nova_compute[185389]: 2026-01-26 16:57:23.878 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:57:24 compute-0 podman[246357]: 2026-01-26 16:57:24.203188773 +0000 UTC m=+0.077276747 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 26 16:57:25 compute-0 nova_compute[185389]: 2026-01-26 16:57:25.010 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:57:26 compute-0 podman[246380]: 2026-01-26 16:57:26.19578767 +0000 UTC m=+0.076017506 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 16:57:28 compute-0 nova_compute[185389]: 2026-01-26 16:57:28.883 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:57:29 compute-0 podman[246401]: 2026-01-26 16:57:29.236372565 +0000 UTC m=+0.108069659 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., name=ubi9, build-date=2024-09-18T21:23:30, container_name=kepler, com.redhat.component=ubi9-container, config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, io.buildah.version=1.29.0, version=9.4, distribution-scope=public, description=The Universal Base Image is designed and engineered to be the base 
layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git)
Jan 26 16:57:29 compute-0 podman[246400]: 2026-01-26 16:57:29.253269805 +0000 UTC m=+0.131969455 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 26 16:57:29 compute-0 podman[246399]: 2026-01-26 16:57:29.273566365 +0000 UTC m=+0.154051683 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 26 16:57:29 compute-0 podman[201244]: time="2026-01-26T16:57:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 16:57:29 compute-0 podman[201244]: @ - - [26/Jan/2026:16:57:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 16:57:29 compute-0 podman[201244]: @ - - [26/Jan/2026:16:57:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4389 "" "Go-http-client/1.1"
Jan 26 16:57:30 compute-0 nova_compute[185389]: 2026-01-26 16:57:30.012 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:57:31 compute-0 openstack_network_exporter[204387]: ERROR   16:57:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 16:57:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:57:31 compute-0 openstack_network_exporter[204387]: ERROR   16:57:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 16:57:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:57:33 compute-0 nova_compute[185389]: 2026-01-26 16:57:33.886 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:57:35 compute-0 nova_compute[185389]: 2026-01-26 16:57:35.015 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:57:38 compute-0 nova_compute[185389]: 2026-01-26 16:57:38.641 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:57:38 compute-0 nova_compute[185389]: 2026-01-26 16:57:38.888 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:57:40 compute-0 nova_compute[185389]: 2026-01-26 16:57:40.017 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:57:43 compute-0 nova_compute[185389]: 2026-01-26 16:57:43.890 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:57:45 compute-0 nova_compute[185389]: 2026-01-26 16:57:45.021 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:57:45 compute-0 nova_compute[185389]: 2026-01-26 16:57:45.961 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:57:45 compute-0 nova_compute[185389]: 2026-01-26 16:57:45.962 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:57:48 compute-0 nova_compute[185389]: 2026-01-26 16:57:48.721 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:57:48 compute-0 nova_compute[185389]: 2026-01-26 16:57:48.894 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:57:50 compute-0 nova_compute[185389]: 2026-01-26 16:57:50.023 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:57:51 compute-0 nova_compute[185389]: 2026-01-26 16:57:51.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:57:51 compute-0 nova_compute[185389]: 2026-01-26 16:57:51.721 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 26 16:57:51 compute-0 nova_compute[185389]: 2026-01-26 16:57:51.741 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 26 16:57:52 compute-0 podman[246463]: 2026-01-26 16:57:52.207494988 +0000 UTC m=+0.073655913 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 26 16:57:52 compute-0 podman[246462]: 2026-01-26 16:57:52.226158375 +0000 UTC m=+0.092828773 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.build-date=20260120, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 26 16:57:52 compute-0 podman[246461]: 2026-01-26 16:57:52.23797466 +0000 UTC m=+0.111820859 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, distribution-scope=public, vcs-type=git, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, managed_by=edpm_ansible, container_name=openstack_network_exporter, release=1755695350, version=9.6, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': 
['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc.)
Jan 26 16:57:52 compute-0 nova_compute[185389]: 2026-01-26 16:57:52.742 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:57:53 compute-0 nova_compute[185389]: 2026-01-26 16:57:53.718 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:57:53 compute-0 nova_compute[185389]: 2026-01-26 16:57:53.898 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:57:54 compute-0 nova_compute[185389]: 2026-01-26 16:57:54.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:57:55 compute-0 nova_compute[185389]: 2026-01-26 16:57:55.027 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:57:55 compute-0 podman[246523]: 2026-01-26 16:57:55.222687775 +0000 UTC m=+0.087207283 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 26 16:57:55 compute-0 nova_compute[185389]: 2026-01-26 16:57:55.610 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:57:56 compute-0 nova_compute[185389]: 2026-01-26 16:57:56.467 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Triggering sync for uuid 60ba224f-9c5d-4eb4-b501-66d7339832b9 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 26 16:57:56 compute-0 nova_compute[185389]: 2026-01-26 16:57:56.468 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Triggering sync for uuid a2578f61-3f19-40f4-a32f-97cf22569550 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 26 16:57:56 compute-0 nova_compute[185389]: 2026-01-26 16:57:56.468 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "60ba224f-9c5d-4eb4-b501-66d7339832b9" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:57:56 compute-0 nova_compute[185389]: 2026-01-26 16:57:56.469 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "60ba224f-9c5d-4eb4-b501-66d7339832b9" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:57:56 compute-0 nova_compute[185389]: 2026-01-26 16:57:56.470 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "a2578f61-3f19-40f4-a32f-97cf22569550" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:57:56 compute-0 nova_compute[185389]: 2026-01-26 16:57:56.471 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "a2578f61-3f19-40f4-a32f-97cf22569550" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:57:56 compute-0 nova_compute[185389]: 2026-01-26 16:57:56.554 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "a2578f61-3f19-40f4-a32f-97cf22569550" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.084s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:57:56 compute-0 nova_compute[185389]: 2026-01-26 16:57:56.555 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "60ba224f-9c5d-4eb4-b501-66d7339832b9" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.087s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:57:57 compute-0 podman[246547]: 2026-01-26 16:57:57.216249416 +0000 UTC m=+0.080078254 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 26 16:57:57 compute-0 nova_compute[185389]: 2026-01-26 16:57:57.582 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:57:57 compute-0 nova_compute[185389]: 2026-01-26 16:57:57.583 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:57:58 compute-0 nova_compute[185389]: 2026-01-26 16:57:58.469 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "refresh_cache-a2578f61-3f19-40f4-a32f-97cf22569550" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:57:58 compute-0 nova_compute[185389]: 2026-01-26 16:57:58.469 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquired lock "refresh_cache-a2578f61-3f19-40f4-a32f-97cf22569550" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:57:58 compute-0 nova_compute[185389]: 2026-01-26 16:57:58.469 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 26 16:57:58 compute-0 nova_compute[185389]: 2026-01-26 16:57:58.902 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:57:59 compute-0 podman[201244]: time="2026-01-26T16:57:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 16:57:59 compute-0 podman[201244]: @ - - [26/Jan/2026:16:57:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 16:57:59 compute-0 podman[201244]: @ - - [26/Jan/2026:16:57:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4378 "" "Go-http-client/1.1"
Jan 26 16:57:59 compute-0 nova_compute[185389]: 2026-01-26 16:57:59.968 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Updating instance_info_cache with network_info: [{"id": "58a644b5-e3a2-4838-9216-8540447cf0a5", "address": "fa:16:3e:ac:8d:a5", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.107", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58a644b5-e3", "ovs_interfaceid": "58a644b5-e3a2-4838-9216-8540447cf0a5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:57:59 compute-0 nova_compute[185389]: 2026-01-26 16:57:59.989 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Releasing lock "refresh_cache-a2578f61-3f19-40f4-a32f-97cf22569550" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:57:59 compute-0 nova_compute[185389]: 2026-01-26 16:57:59.990 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 26 16:57:59 compute-0 nova_compute[185389]: 2026-01-26 16:57:59.991 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:57:59 compute-0 nova_compute[185389]: 2026-01-26 16:57:59.991 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:58:00 compute-0 nova_compute[185389]: 2026-01-26 16:58:00.030 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:58:00 compute-0 nova_compute[185389]: 2026-01-26 16:58:00.043 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:58:00 compute-0 nova_compute[185389]: 2026-01-26 16:58:00.043 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:58:00 compute-0 nova_compute[185389]: 2026-01-26 16:58:00.044 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:58:00 compute-0 nova_compute[185389]: 2026-01-26 16:58:00.044 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:58:00 compute-0 nova_compute[185389]: 2026-01-26 16:58:00.148 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:58:00 compute-0 nova_compute[185389]: 2026-01-26 16:58:00.219 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:58:00 compute-0 nova_compute[185389]: 2026-01-26 16:58:00.221 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:58:00 compute-0 podman[246567]: 2026-01-26 16:58:00.229362567 +0000 UTC m=+0.113304238 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 16:58:00 compute-0 podman[246566]: 2026-01-26 16:58:00.253821389 +0000 UTC m=+0.142514096 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, 
org.label-schema.license=GPLv2, tcib_managed=true)
Jan 26 16:58:00 compute-0 podman[246568]: 2026-01-26 16:58:00.266678912 +0000 UTC m=+0.132017338 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.4, com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.buildah.version=1.29.0, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc.)
Jan 26 16:58:00 compute-0 nova_compute[185389]: 2026-01-26 16:58:00.294 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:58:00 compute-0 nova_compute[185389]: 2026-01-26 16:58:00.295 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:58:00 compute-0 nova_compute[185389]: 2026-01-26 16:58:00.361 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:58:00 compute-0 nova_compute[185389]: 2026-01-26 16:58:00.363 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:58:00 compute-0 nova_compute[185389]: 2026-01-26 16:58:00.444 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:58:00 compute-0 nova_compute[185389]: 2026-01-26 16:58:00.451 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:58:00 compute-0 nova_compute[185389]: 2026-01-26 16:58:00.535 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:58:00 compute-0 nova_compute[185389]: 2026-01-26 16:58:00.537 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:58:00 compute-0 nova_compute[185389]: 2026-01-26 16:58:00.602 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:58:00 compute-0 nova_compute[185389]: 2026-01-26 16:58:00.604 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:58:00 compute-0 nova_compute[185389]: 2026-01-26 16:58:00.698 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:58:00 compute-0 nova_compute[185389]: 2026-01-26 16:58:00.699 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:58:00 compute-0 nova_compute[185389]: 2026-01-26 16:58:00.762 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:58:01 compute-0 nova_compute[185389]: 2026-01-26 16:58:01.124 185393 WARNING nova.virt.libvirt.driver [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:58:01 compute-0 nova_compute[185389]: 2026-01-26 16:58:01.126 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4851MB free_disk=72.40018844604492GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:58:01 compute-0 nova_compute[185389]: 2026-01-26 16:58:01.126 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:58:01 compute-0 nova_compute[185389]: 2026-01-26 16:58:01.126 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:58:01 compute-0 nova_compute[185389]: 2026-01-26 16:58:01.316 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance 60ba224f-9c5d-4eb4-b501-66d7339832b9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:58:01 compute-0 nova_compute[185389]: 2026-01-26 16:58:01.317 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance a2578f61-3f19-40f4-a32f-97cf22569550 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:58:01 compute-0 nova_compute[185389]: 2026-01-26 16:58:01.317 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:58:01 compute-0 nova_compute[185389]: 2026-01-26 16:58:01.318 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:58:01 compute-0 nova_compute[185389]: 2026-01-26 16:58:01.391 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Refreshing inventories for resource provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 26 16:58:01 compute-0 openstack_network_exporter[204387]: ERROR   16:58:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 16:58:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:58:01 compute-0 openstack_network_exporter[204387]: ERROR   16:58:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 16:58:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:58:01 compute-0 nova_compute[185389]: 2026-01-26 16:58:01.578 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Updating ProviderTree inventory for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 26 16:58:01 compute-0 nova_compute[185389]: 2026-01-26 16:58:01.578 185393 DEBUG nova.compute.provider_tree [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Updating inventory in ProviderTree for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 26 16:58:01 compute-0 nova_compute[185389]: 2026-01-26 16:58:01.598 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Refreshing aggregate associations for resource provider b0bb5d31-f35b-4a04-b67d-66acc24fb822, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 26 16:58:01 compute-0 nova_compute[185389]: 2026-01-26 16:58:01.627 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Refreshing trait associations for resource provider b0bb5d31-f35b-4a04-b67d-66acc24fb822, traits: COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SHA,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_ABM,HW_CPU_X86_AMD_SVM,HW_CPU_X86_F16C,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE42,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_BMI,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_MMX,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 26 16:58:01 compute-0 nova_compute[185389]: 2026-01-26 16:58:01.716 185393 DEBUG nova.compute.provider_tree [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:58:01 compute-0 nova_compute[185389]: 2026-01-26 16:58:01.735 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:58:01 compute-0 nova_compute[185389]: 2026-01-26 16:58:01.736 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:58:01 compute-0 nova_compute[185389]: 2026-01-26 16:58:01.737 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.611s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:58:01 compute-0 nova_compute[185389]: 2026-01-26 16:58:01.737 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:58:01 compute-0 nova_compute[185389]: 2026-01-26 16:58:01.737 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 26 16:58:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:58:01.740 106955 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:58:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:58:01.741 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:58:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:58:01.742 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:58:02 compute-0 nova_compute[185389]: 2026-01-26 16:58:02.884 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:58:03 compute-0 nova_compute[185389]: 2026-01-26 16:58:03.714 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:58:03 compute-0 nova_compute[185389]: 2026-01-26 16:58:03.905 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:58:05 compute-0 nova_compute[185389]: 2026-01-26 16:58:05.033 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:58:08 compute-0 nova_compute[185389]: 2026-01-26 16:58:08.911 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:58:09 compute-0 nova_compute[185389]: 2026-01-26 16:58:09.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:58:10 compute-0 nova_compute[185389]: 2026-01-26 16:58:10.036 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:58:13 compute-0 nova_compute[185389]: 2026-01-26 16:58:13.914 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:58:15 compute-0 nova_compute[185389]: 2026-01-26 16:58:15.039 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:58:18 compute-0 nova_compute[185389]: 2026-01-26 16:58:18.915 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:58:20 compute-0 nova_compute[185389]: 2026-01-26 16:58:20.042 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:58:23 compute-0 podman[246656]: 2026-01-26 16:58:23.206504604 +0000 UTC m=+0.080865194 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260120, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 26 16:58:23 compute-0 podman[246655]: 2026-01-26 16:58:23.230226247 +0000 UTC m=+0.101037692 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.openshift.expose-services=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', 
'/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, distribution-scope=public, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, config_id=openstack_network_exporter, vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers)
Jan 26 16:58:23 compute-0 podman[246657]: 2026-01-26 16:58:23.236157274 +0000 UTC m=+0.089148325 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 26 16:58:23 compute-0 nova_compute[185389]: 2026-01-26 16:58:23.920 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:58:25 compute-0 nova_compute[185389]: 2026-01-26 16:58:25.045 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:58:26 compute-0 podman[246715]: 2026-01-26 16:58:26.229553641 +0000 UTC m=+0.105876020 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 26 16:58:28 compute-0 podman[246739]: 2026-01-26 16:58:28.232449491 +0000 UTC m=+0.097107677 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent)
Jan 26 16:58:28 compute-0 nova_compute[185389]: 2026-01-26 16:58:28.923 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:58:29 compute-0 podman[201244]: time="2026-01-26T16:58:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 16:58:29 compute-0 podman[201244]: @ - - [26/Jan/2026:16:58:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 16:58:29 compute-0 podman[201244]: @ - - [26/Jan/2026:16:58:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4380 "" "Go-http-client/1.1"
Jan 26 16:58:30 compute-0 nova_compute[185389]: 2026-01-26 16:58:30.047 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:58:31 compute-0 podman[246759]: 2026-01-26 16:58:31.227201633 +0000 UTC m=+0.110672488 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, container_name=kepler, io.openshift.expose-services=, release=1214.1726694543, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, version=9.4, name=ubi9, config_id=kepler, architecture=x86_64)
Jan 26 16:58:31 compute-0 podman[246758]: 2026-01-26 16:58:31.250673418 +0000 UTC m=+0.137918893 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:58:31 compute-0 podman[246757]: 2026-01-26 16:58:31.268174314 +0000 UTC m=+0.160234078 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, 
org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.345 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.345 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.345 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.346 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f04ce8af020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.346 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.346 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.348 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.348 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.348 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.348 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.348 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8adb20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.348 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.349 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.349 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.349 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.349 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.349 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.350 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.350 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.350 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.350 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.350 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.351 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.352 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '60ba224f-9c5d-4eb4-b501-66d7339832b9', 'name': 'test_0', 'flavor': {'id': 'c2a8df4d-a1d7-42a3-8279-8c7de8a1a662', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '718285d9-0264-40f4-9fb3-d2faff180284'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'aa8f1f3bbce34237a208c8e92ca9286f', 'user_id': '3c0ab9326d69400aa6a4a91432885d7f', 'hostId': '5b2ad2004cb5a5985538ff82d4dda707a9aa9c0c35c745039e18e89b', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.355 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'a2578f61-3f19-40f4-a32f-97cf22569550', 'name': 'vn-vo2qfhx-3o3bmw4mfsy3-eo23jljqgyv6-vnf-pi7veetjym6y', 'flavor': {'id': 'c2a8df4d-a1d7-42a3-8279-8c7de8a1a662', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '718285d9-0264-40f4-9fb3-d2faff180284'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'aa8f1f3bbce34237a208c8e92ca9286f', 'user_id': '3c0ab9326d69400aa6a4a91432885d7f', 'hostId': '5b2ad2004cb5a5985538ff82d4dda707a9aa9c0c35c745039e18e89b', 'status': 'active', 'metadata': {'metering.server_group': '06b33269-d1c6-4fb9-a44b-be304982a550'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.355 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.356 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.356 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.356 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.357 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-26T16:58:31.356241) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:58:31 compute-0 openstack_network_exporter[204387]: ERROR   16:58:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 16:58:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:58:31 compute-0 openstack_network_exporter[204387]: ERROR   16:58:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 16:58:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.429 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.429 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.430 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.494 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.495 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.495 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.503 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.503 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f04ce8af080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.504 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.504 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.504 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.504 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.504 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.latency volume: 876270399 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.505 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.latency volume: 10769042 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.505 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.505 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.latency volume: 1221465504 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.506 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.latency volume: 9811607 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.506 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.507 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.507 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f04ce8af0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.507 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.508 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.508 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.508 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.508 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.508 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-26T16:58:31.504526) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.508 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.509 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.509 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.509 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.510 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.510 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.511 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f04ce8af470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.511 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-26T16:58:31.508186) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.511 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.511 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.511 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.511 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.511 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-26T16:58:31.511452) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.515 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.518 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.519 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.519 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f04ce8ac260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.519 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.519 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f04cfa81c70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.519 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.519 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.519 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.519 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.520 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-26T16:58:31.519645) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.539 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/cpu volume: 48470000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.565 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/cpu volume: 42870000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.566 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.566 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f04ce8ac170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.566 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.567 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.567 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.567 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.567 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.567 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.568 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.568 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f04ce8af140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.568 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.568 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.568 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.569 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.569 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-26T16:58:31.567217) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.569 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.569 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f04ce8adb50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.569 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.570 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-26T16:58:31.569038) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.570 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.570 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.570 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.570 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.570 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-26T16:58:31.570433) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.571 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.571 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.571 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f04ce8ad9a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.571 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.571 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.571 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.571 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.572 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.bytes volume: 2412 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.572 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-26T16:58:31.571853) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.572 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.outgoing.bytes volume: 2468 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.572 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.572 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f04ce8af1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.573 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.573 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.573 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.573 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.573 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.573 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f04ce8ad8e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.574 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.574 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.574 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.574 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-26T16:58:31.573292) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.574 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.574 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-26T16:58:31.574371) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.574 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.574 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.outgoing.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.575 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.575 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f04ce8ada60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.575 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.575 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.575 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.575 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.575 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.576 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-26T16:58:31.575649) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.576 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.576 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.576 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f04ce8adaf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.576 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.576 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f04ce8ad880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.576 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.576 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.577 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.577 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.577 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.577 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.577 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.578 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f04ce8af3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.578 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.578 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.578 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.578 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-26T16:58:31.577149) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.578 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.578 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/memory.usage volume: 48.79296875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.579 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/memory.usage volume: 48.953125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.579 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.579 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-26T16:58:31.578717) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.579 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f04ce8af6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.579 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.579 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.580 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.580 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.580 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.580 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.581 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.581 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f04ce8af410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.581 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.581 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.581 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.581 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.581 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-26T16:58:31.580183) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.582 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.bytes volume: 2304 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.582 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-26T16:58:31.581746) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.582 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.incoming.bytes volume: 1696 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.582 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.582 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f04ce8ac470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.582 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.582 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.583 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.583 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.583 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.583 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-26T16:58:31.583167) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.583 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.584 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.584 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f04d17142f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.584 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.584 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.584 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.584 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.584 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-26T16:58:31.584468) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.608 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.609 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.609 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.634 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.634 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.635 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.635 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.635 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f04ce8afe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.635 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.636 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.636 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.636 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.636 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.636 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.636 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-26T16:58:31.636231) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.637 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.637 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.637 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.637 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.638 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.638 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f04ce8aede0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.638 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.638 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.638 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.638 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.638 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-26T16:58:31.638636) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.638 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.639 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.639 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.639 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.639 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.639 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.640 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.640 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f04ce8aef00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.640 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.640 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.640 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.640 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.640 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.latency volume: 331117122 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.641 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-26T16:58:31.640800) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.641 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.latency volume: 97972603 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.641 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.latency volume: 58692222 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.641 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.latency volume: 437272566 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.641 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.latency volume: 86953754 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.642 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.latency volume: 62824695 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.642 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.642 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f04ce8ac710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.642 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.642 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.643 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.643 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.643 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.643 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.643 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.643 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f04ce8aef60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.644 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.644 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.644 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-26T16:58:31.643087) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.644 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.644 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.644 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.644 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.645 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-26T16:58:31.644575) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.645 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.645 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.646 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.646 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.646 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.646 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f04ce8aefc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.646 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.646 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.647 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.647 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.647 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.647 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-26T16:58:31.647123) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.647 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.647 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.648 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.648 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.648 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.648 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.649 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.649 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.649 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.649 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.650 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.650 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.650 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.650 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.650 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.650 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.650 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.650 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.650 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.650 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.651 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.651 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.651 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.651 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.651 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.651 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.651 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.651 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.651 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.651 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.652 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:58:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 16:58:31.652 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 16:58:33 compute-0 nova_compute[185389]: 2026-01-26 16:58:33.925 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:58:35 compute-0 nova_compute[185389]: 2026-01-26 16:58:35.052 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:58:38 compute-0 nova_compute[185389]: 2026-01-26 16:58:38.929 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:58:40 compute-0 nova_compute[185389]: 2026-01-26 16:58:40.056 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:58:43 compute-0 nova_compute[185389]: 2026-01-26 16:58:43.930 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:58:45 compute-0 nova_compute[185389]: 2026-01-26 16:58:45.059 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:58:46 compute-0 nova_compute[185389]: 2026-01-26 16:58:46.756 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:58:46 compute-0 nova_compute[185389]: 2026-01-26 16:58:46.758 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:58:48 compute-0 nova_compute[185389]: 2026-01-26 16:58:48.935 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:58:50 compute-0 nova_compute[185389]: 2026-01-26 16:58:50.062 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:58:50 compute-0 nova_compute[185389]: 2026-01-26 16:58:50.722 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:58:53 compute-0 nova_compute[185389]: 2026-01-26 16:58:53.939 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:58:54 compute-0 podman[246826]: 2026-01-26 16:58:54.213781868 +0000 UTC m=+0.083648769 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 26 16:58:54 compute-0 podman[246824]: 2026-01-26 16:58:54.216236433 +0000 UTC m=+0.086353750 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, config_id=openstack_network_exporter, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, distribution-scope=public, managed_by=edpm_ansible, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, release=1755695350, vendor=Red Hat, Inc., architecture=x86_64)
Jan 26 16:58:54 compute-0 podman[246825]: 2026-01-26 16:58:54.237670273 +0000 UTC m=+0.107995456 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260120, org.label-schema.vendor=CentOS, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, config_id=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4)
Jan 26 16:58:54 compute-0 nova_compute[185389]: 2026-01-26 16:58:54.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:58:54 compute-0 nova_compute[185389]: 2026-01-26 16:58:54.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:58:55 compute-0 nova_compute[185389]: 2026-01-26 16:58:55.065 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:58:55 compute-0 nova_compute[185389]: 2026-01-26 16:58:55.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:58:56 compute-0 nova_compute[185389]: 2026-01-26 16:58:56.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:58:56 compute-0 nova_compute[185389]: 2026-01-26 16:58:56.720 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:58:56 compute-0 nova_compute[185389]: 2026-01-26 16:58:56.721 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 16:58:57 compute-0 podman[246883]: 2026-01-26 16:58:57.210163985 +0000 UTC m=+0.099041468 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 26 16:58:57 compute-0 nova_compute[185389]: 2026-01-26 16:58:57.768 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "refresh_cache-60ba224f-9c5d-4eb4-b501-66d7339832b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:58:57 compute-0 nova_compute[185389]: 2026-01-26 16:58:57.769 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquired lock "refresh_cache-60ba224f-9c5d-4eb4-b501-66d7339832b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:58:57 compute-0 nova_compute[185389]: 2026-01-26 16:58:57.770 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 26 16:58:57 compute-0 nova_compute[185389]: 2026-01-26 16:58:57.770 185393 DEBUG nova.objects.instance [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 60ba224f-9c5d-4eb4-b501-66d7339832b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:58:58 compute-0 nova_compute[185389]: 2026-01-26 16:58:58.944 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:58:59 compute-0 podman[246906]: 2026-01-26 16:58:59.18810536 +0000 UTC m=+0.072188884 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true)
Jan 26 16:58:59 compute-0 podman[201244]: time="2026-01-26T16:58:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 16:58:59 compute-0 podman[201244]: @ - - [26/Jan/2026:16:58:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 16:58:59 compute-0 podman[201244]: @ - - [26/Jan/2026:16:58:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4383 "" "Go-http-client/1.1"
Jan 26 16:59:00 compute-0 nova_compute[185389]: 2026-01-26 16:59:00.067 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:59:00 compute-0 nova_compute[185389]: 2026-01-26 16:59:00.106 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Updating instance_info_cache with network_info: [{"id": "0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f", "address": "fa:16:3e:b0:51:31", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.57", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f88f3ae-fb", "ovs_interfaceid": "0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:59:00 compute-0 nova_compute[185389]: 2026-01-26 16:59:00.298 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Releasing lock "refresh_cache-60ba224f-9c5d-4eb4-b501-66d7339832b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:59:00 compute-0 nova_compute[185389]: 2026-01-26 16:59:00.299 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 26 16:59:00 compute-0 nova_compute[185389]: 2026-01-26 16:59:00.299 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:59:00 compute-0 nova_compute[185389]: 2026-01-26 16:59:00.333 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:59:00 compute-0 nova_compute[185389]: 2026-01-26 16:59:00.334 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:59:00 compute-0 nova_compute[185389]: 2026-01-26 16:59:00.335 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:59:00 compute-0 nova_compute[185389]: 2026-01-26 16:59:00.335 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:59:00 compute-0 nova_compute[185389]: 2026-01-26 16:59:00.491 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:59:00 compute-0 nova_compute[185389]: 2026-01-26 16:59:00.553 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:59:00 compute-0 nova_compute[185389]: 2026-01-26 16:59:00.554 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:59:00 compute-0 nova_compute[185389]: 2026-01-26 16:59:00.614 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:59:00 compute-0 nova_compute[185389]: 2026-01-26 16:59:00.615 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:59:00 compute-0 nova_compute[185389]: 2026-01-26 16:59:00.685 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:59:00 compute-0 nova_compute[185389]: 2026-01-26 16:59:00.687 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:59:00 compute-0 nova_compute[185389]: 2026-01-26 16:59:00.751 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:59:00 compute-0 nova_compute[185389]: 2026-01-26 16:59:00.759 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:59:00 compute-0 nova_compute[185389]: 2026-01-26 16:59:00.820 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:59:00 compute-0 nova_compute[185389]: 2026-01-26 16:59:00.822 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:59:00 compute-0 nova_compute[185389]: 2026-01-26 16:59:00.883 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:59:00 compute-0 nova_compute[185389]: 2026-01-26 16:59:00.885 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:59:00 compute-0 nova_compute[185389]: 2026-01-26 16:59:00.950 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:59:00 compute-0 nova_compute[185389]: 2026-01-26 16:59:00.952 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:59:01 compute-0 nova_compute[185389]: 2026-01-26 16:59:01.011 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:59:01 compute-0 nova_compute[185389]: 2026-01-26 16:59:01.363 185393 WARNING nova.virt.libvirt.driver [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:59:01 compute-0 nova_compute[185389]: 2026-01-26 16:59:01.364 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4846MB free_disk=72.40018844604492GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:59:01 compute-0 nova_compute[185389]: 2026-01-26 16:59:01.364 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:59:01 compute-0 nova_compute[185389]: 2026-01-26 16:59:01.365 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:59:01 compute-0 openstack_network_exporter[204387]: ERROR   16:59:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 16:59:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:59:01 compute-0 openstack_network_exporter[204387]: ERROR   16:59:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 16:59:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:59:01 compute-0 nova_compute[185389]: 2026-01-26 16:59:01.591 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance 60ba224f-9c5d-4eb4-b501-66d7339832b9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:59:01 compute-0 nova_compute[185389]: 2026-01-26 16:59:01.592 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance a2578f61-3f19-40f4-a32f-97cf22569550 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:59:01 compute-0 nova_compute[185389]: 2026-01-26 16:59:01.592 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:59:01 compute-0 nova_compute[185389]: 2026-01-26 16:59:01.592 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:59:01 compute-0 nova_compute[185389]: 2026-01-26 16:59:01.650 185393 DEBUG nova.compute.provider_tree [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:59:01 compute-0 nova_compute[185389]: 2026-01-26 16:59:01.669 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:59:01 compute-0 nova_compute[185389]: 2026-01-26 16:59:01.672 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:59:01 compute-0 nova_compute[185389]: 2026-01-26 16:59:01.673 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.308s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:59:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:59:01.741 106955 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:59:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:59:01.743 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:59:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 16:59:01.744 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:59:02 compute-0 nova_compute[185389]: 2026-01-26 16:59:02.093 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:59:02 compute-0 nova_compute[185389]: 2026-01-26 16:59:02.093 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:59:02 compute-0 podman[246948]: 2026-01-26 16:59:02.220389572 +0000 UTC m=+0.101247108 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 16:59:02 compute-0 podman[246949]: 2026-01-26 16:59:02.24360742 +0000 UTC m=+0.118348503 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, config_id=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, managed_by=edpm_ansible, name=ubi9, release-0.7.12=, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., version=9.4, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, vcs-type=git, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, com.redhat.component=ubi9-container)
Jan 26 16:59:02 compute-0 podman[246947]: 2026-01-26 16:59:02.26426393 +0000 UTC m=+0.144803657 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0)
Jan 26 16:59:03 compute-0 nova_compute[185389]: 2026-01-26 16:59:03.946 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:59:05 compute-0 nova_compute[185389]: 2026-01-26 16:59:05.069 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:59:08 compute-0 nova_compute[185389]: 2026-01-26 16:59:08.951 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:59:10 compute-0 nova_compute[185389]: 2026-01-26 16:59:10.070 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:59:13 compute-0 nova_compute[185389]: 2026-01-26 16:59:13.955 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:59:15 compute-0 nova_compute[185389]: 2026-01-26 16:59:15.072 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:59:18 compute-0 nova_compute[185389]: 2026-01-26 16:59:18.957 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:59:20 compute-0 nova_compute[185389]: 2026-01-26 16:59:20.075 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:59:23 compute-0 nova_compute[185389]: 2026-01-26 16:59:23.961 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:59:25 compute-0 nova_compute[185389]: 2026-01-26 16:59:25.077 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:59:25 compute-0 podman[247011]: 2026-01-26 16:59:25.209986727 +0000 UTC m=+0.080950746 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 26 16:59:25 compute-0 podman[247010]: 2026-01-26 16:59:25.218066483 +0000 UTC m=+0.102651444 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, 
config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260120, org.label-schema.schema-version=1.0, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image)
Jan 26 16:59:25 compute-0 podman[247009]: 2026-01-26 16:59:25.21908124 +0000 UTC m=+0.101736101 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, maintainer=Red Hat, Inc., name=ubi9-minimal, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, vendor=Red Hat, Inc., config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, version=9.6, io.openshift.expose-services=)
Jan 26 16:59:28 compute-0 podman[247070]: 2026-01-26 16:59:28.204452954 +0000 UTC m=+0.087628264 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 26 16:59:28 compute-0 nova_compute[185389]: 2026-01-26 16:59:28.967 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:59:29 compute-0 podman[201244]: time="2026-01-26T16:59:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 16:59:29 compute-0 podman[201244]: @ - - [26/Jan/2026:16:59:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 16:59:29 compute-0 podman[201244]: @ - - [26/Jan/2026:16:59:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4386 "" "Go-http-client/1.1"
Jan 26 16:59:30 compute-0 nova_compute[185389]: 2026-01-26 16:59:30.081 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:59:30 compute-0 podman[247093]: 2026-01-26 16:59:30.217080693 +0000 UTC m=+0.101157735 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:59:31 compute-0 openstack_network_exporter[204387]: ERROR   16:59:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 16:59:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:59:31 compute-0 openstack_network_exporter[204387]: ERROR   16:59:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 16:59:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 16:59:33 compute-0 podman[247110]: 2026-01-26 16:59:33.238756424 +0000 UTC m=+0.107061702 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_ipmi, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 26 16:59:33 compute-0 podman[247111]: 2026-01-26 16:59:33.25701996 +0000 UTC m=+0.124815015 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.openshift.tags=base rhel9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, com.redhat.component=ubi9-container, release=1214.1726694543, name=ubi9, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, architecture=x86_64, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., managed_by=edpm_ansible, config_id=kepler, container_name=kepler)
Jan 26 16:59:33 compute-0 podman[247109]: 2026-01-26 16:59:33.282417147 +0000 UTC m=+0.158924364 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, 
org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 16:59:33 compute-0 nova_compute[185389]: 2026-01-26 16:59:33.973 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:59:35 compute-0 nova_compute[185389]: 2026-01-26 16:59:35.084 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:59:38 compute-0 nova_compute[185389]: 2026-01-26 16:59:38.976 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:59:40 compute-0 nova_compute[185389]: 2026-01-26 16:59:40.087 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:59:43 compute-0 nova_compute[185389]: 2026-01-26 16:59:43.979 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:59:45 compute-0 nova_compute[185389]: 2026-01-26 16:59:45.092 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:59:46 compute-0 nova_compute[185389]: 2026-01-26 16:59:46.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:59:46 compute-0 nova_compute[185389]: 2026-01-26 16:59:46.720 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:59:48 compute-0 nova_compute[185389]: 2026-01-26 16:59:48.984 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:59:50 compute-0 nova_compute[185389]: 2026-01-26 16:59:50.096 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:59:50 compute-0 nova_compute[185389]: 2026-01-26 16:59:50.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:59:53 compute-0 nova_compute[185389]: 2026-01-26 16:59:53.989 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:59:54 compute-0 nova_compute[185389]: 2026-01-26 16:59:54.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:59:54 compute-0 nova_compute[185389]: 2026-01-26 16:59:54.721 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:59:55 compute-0 nova_compute[185389]: 2026-01-26 16:59:55.099 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:59:56 compute-0 podman[247175]: 2026-01-26 16:59:56.210360385 +0000 UTC m=+0.075997706 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 16:59:56 compute-0 podman[247173]: 2026-01-26 16:59:56.229836563 +0000 UTC m=+0.097427785 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, version=9.6, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-type=git, vendor=Red Hat, Inc., container_name=openstack_network_exporter)
Jan 26 16:59:56 compute-0 podman[247174]: 2026-01-26 16:59:56.253511124 +0000 UTC m=+0.117694625 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, 
org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20260120, org.label-schema.license=GPLv2, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, tcib_managed=true)
Jan 26 16:59:56 compute-0 nova_compute[185389]: 2026-01-26 16:59:56.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:59:56 compute-0 nova_compute[185389]: 2026-01-26 16:59:56.721 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:59:57 compute-0 nova_compute[185389]: 2026-01-26 16:59:57.784 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "refresh_cache-a2578f61-3f19-40f4-a32f-97cf22569550" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:59:57 compute-0 nova_compute[185389]: 2026-01-26 16:59:57.785 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquired lock "refresh_cache-a2578f61-3f19-40f4-a32f-97cf22569550" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:59:57 compute-0 nova_compute[185389]: 2026-01-26 16:59:57.786 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 26 16:59:58 compute-0 nova_compute[185389]: 2026-01-26 16:59:58.994 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:59:59 compute-0 podman[247233]: 2026-01-26 16:59:59.24080046 +0000 UTC m=+0.116510323 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 26 16:59:59 compute-0 podman[201244]: time="2026-01-26T16:59:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 16:59:59 compute-0 podman[201244]: @ - - [26/Jan/2026:16:59:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 16:59:59 compute-0 podman[201244]: @ - - [26/Jan/2026:16:59:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4383 "" "Go-http-client/1.1"
Jan 26 17:00:00 compute-0 nova_compute[185389]: 2026-01-26 17:00:00.104 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:00:01 compute-0 podman[247257]: 2026-01-26 17:00:01.216696811 +0000 UTC m=+0.093914832 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 26 17:00:01 compute-0 openstack_network_exporter[204387]: ERROR   17:00:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 17:00:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:00:01 compute-0 openstack_network_exporter[204387]: ERROR   17:00:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 17:00:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:00:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:00:01.744 106955 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:00:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:00:01.744 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:00:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:00:01.745 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:00:02 compute-0 nova_compute[185389]: 2026-01-26 17:00:02.092 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Updating instance_info_cache with network_info: [{"id": "58a644b5-e3a2-4838-9216-8540447cf0a5", "address": "fa:16:3e:ac:8d:a5", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.107", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58a644b5-e3", "ovs_interfaceid": "58a644b5-e3a2-4838-9216-8540447cf0a5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 17:00:02 compute-0 nova_compute[185389]: 2026-01-26 17:00:02.109 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Releasing lock "refresh_cache-a2578f61-3f19-40f4-a32f-97cf22569550" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 17:00:02 compute-0 nova_compute[185389]: 2026-01-26 17:00:02.110 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 26 17:00:02 compute-0 nova_compute[185389]: 2026-01-26 17:00:02.111 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:00:02 compute-0 nova_compute[185389]: 2026-01-26 17:00:02.111 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:00:02 compute-0 nova_compute[185389]: 2026-01-26 17:00:02.112 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:00:02 compute-0 nova_compute[185389]: 2026-01-26 17:00:02.142 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:00:02 compute-0 nova_compute[185389]: 2026-01-26 17:00:02.143 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:00:02 compute-0 nova_compute[185389]: 2026-01-26 17:00:02.144 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:00:02 compute-0 nova_compute[185389]: 2026-01-26 17:00:02.145 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 17:00:02 compute-0 nova_compute[185389]: 2026-01-26 17:00:02.262 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:00:02 compute-0 nova_compute[185389]: 2026-01-26 17:00:02.353 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:00:02 compute-0 nova_compute[185389]: 2026-01-26 17:00:02.355 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:00:02 compute-0 nova_compute[185389]: 2026-01-26 17:00:02.421 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:00:02 compute-0 nova_compute[185389]: 2026-01-26 17:00:02.423 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:00:02 compute-0 nova_compute[185389]: 2026-01-26 17:00:02.487 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:00:02 compute-0 nova_compute[185389]: 2026-01-26 17:00:02.489 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:00:02 compute-0 nova_compute[185389]: 2026-01-26 17:00:02.568 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:00:02 compute-0 nova_compute[185389]: 2026-01-26 17:00:02.580 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:00:02 compute-0 nova_compute[185389]: 2026-01-26 17:00:02.662 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:00:02 compute-0 nova_compute[185389]: 2026-01-26 17:00:02.663 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:00:02 compute-0 nova_compute[185389]: 2026-01-26 17:00:02.755 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:00:02 compute-0 nova_compute[185389]: 2026-01-26 17:00:02.757 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:00:02 compute-0 nova_compute[185389]: 2026-01-26 17:00:02.858 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:00:02 compute-0 nova_compute[185389]: 2026-01-26 17:00:02.861 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:00:02 compute-0 nova_compute[185389]: 2026-01-26 17:00:02.959 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:00:03 compute-0 nova_compute[185389]: 2026-01-26 17:00:03.403 185393 WARNING nova.virt.libvirt.driver [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:00:03 compute-0 nova_compute[185389]: 2026-01-26 17:00:03.405 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4842MB free_disk=72.40092086791992GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 17:00:03 compute-0 nova_compute[185389]: 2026-01-26 17:00:03.405 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:00:03 compute-0 nova_compute[185389]: 2026-01-26 17:00:03.406 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:00:03 compute-0 nova_compute[185389]: 2026-01-26 17:00:03.496 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance 60ba224f-9c5d-4eb4-b501-66d7339832b9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 17:00:03 compute-0 nova_compute[185389]: 2026-01-26 17:00:03.497 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance a2578f61-3f19-40f4-a32f-97cf22569550 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 17:00:03 compute-0 nova_compute[185389]: 2026-01-26 17:00:03.497 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 17:00:03 compute-0 nova_compute[185389]: 2026-01-26 17:00:03.498 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 17:00:03 compute-0 nova_compute[185389]: 2026-01-26 17:00:03.609 185393 DEBUG nova.compute.provider_tree [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:00:03 compute-0 nova_compute[185389]: 2026-01-26 17:00:03.634 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:00:03 compute-0 nova_compute[185389]: 2026-01-26 17:00:03.638 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 17:00:03 compute-0 nova_compute[185389]: 2026-01-26 17:00:03.639 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.233s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:00:03 compute-0 nova_compute[185389]: 2026-01-26 17:00:03.999 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:00:04 compute-0 podman[247301]: 2026-01-26 17:00:04.248835448 +0000 UTC m=+0.123460819 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0)
Jan 26 17:00:04 compute-0 podman[247302]: 2026-01-26 17:00:04.264475175 +0000 UTC m=+0.131002849 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=kepler, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., container_name=kepler, name=ubi9, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., managed_by=edpm_ansible, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, release-0.7.12=)
Jan 26 17:00:04 compute-0 podman[247300]: 2026-01-26 17:00:04.28382393 +0000 UTC m=+0.161997644 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller)
Jan 26 17:00:05 compute-0 nova_compute[185389]: 2026-01-26 17:00:05.106 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:00:07 compute-0 nova_compute[185389]: 2026-01-26 17:00:07.634 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:00:07 compute-0 nova_compute[185389]: 2026-01-26 17:00:07.635 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:00:09 compute-0 nova_compute[185389]: 2026-01-26 17:00:09.004 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:00:10 compute-0 nova_compute[185389]: 2026-01-26 17:00:10.108 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:00:14 compute-0 nova_compute[185389]: 2026-01-26 17:00:14.008 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:00:15 compute-0 nova_compute[185389]: 2026-01-26 17:00:15.113 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:00:19 compute-0 nova_compute[185389]: 2026-01-26 17:00:19.013 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:00:20 compute-0 nova_compute[185389]: 2026-01-26 17:00:20.116 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:00:24 compute-0 nova_compute[185389]: 2026-01-26 17:00:24.019 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:00:25 compute-0 nova_compute[185389]: 2026-01-26 17:00:25.120 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:00:27 compute-0 podman[247365]: 2026-01-26 17:00:27.239941531 +0000 UTC m=+0.108411658 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260120, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4)
Jan 26 17:00:27 compute-0 podman[247366]: 2026-01-26 17:00:27.241840222 +0000 UTC m=+0.100522138 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 26 17:00:27 compute-0 podman[247364]: 2026-01-26 17:00:27.253374169 +0000 UTC m=+0.123241933 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, distribution-scope=public, managed_by=edpm_ansible, vcs-type=git)
Jan 26 17:00:29 compute-0 nova_compute[185389]: 2026-01-26 17:00:29.024 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:00:29 compute-0 podman[201244]: time="2026-01-26T17:00:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 17:00:29 compute-0 podman[201244]: @ - - [26/Jan/2026:17:00:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 17:00:29 compute-0 podman[201244]: @ - - [26/Jan/2026:17:00:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4380 "" "Go-http-client/1.1"
Jan 26 17:00:30 compute-0 nova_compute[185389]: 2026-01-26 17:00:30.123 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:00:30 compute-0 podman[247426]: 2026-01-26 17:00:30.211257339 +0000 UTC m=+0.082450827 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.346 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.346 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d0f4a810>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.348 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f04ce8af020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.349 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d0f4a810>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.349 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d0f4a810>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.349 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d0f4a810>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.350 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d0f4a810>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.350 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d0f4a810>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.351 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d0f4a810>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.351 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d0f4a810>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.351 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d0f4a810>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.352 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d0f4a810>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.352 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d0f4a810>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.352 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d0f4a810>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.353 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d0f4a810>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.353 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8adb20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d0f4a810>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.354 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d0f4a810>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.354 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d0f4a810>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.354 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d0f4a810>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.354 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d0f4a810>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.356 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d0f4a810>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.357 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d0f4a810>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.357 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d0f4a810>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.357 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d0f4a810>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.358 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d0f4a810>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.358 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d0f4a810>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.358 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d0f4a810>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.358 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d0f4a810>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.359 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '60ba224f-9c5d-4eb4-b501-66d7339832b9', 'name': 'test_0', 'flavor': {'id': 'c2a8df4d-a1d7-42a3-8279-8c7de8a1a662', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '718285d9-0264-40f4-9fb3-d2faff180284'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'aa8f1f3bbce34237a208c8e92ca9286f', 'user_id': '3c0ab9326d69400aa6a4a91432885d7f', 'hostId': '5b2ad2004cb5a5985538ff82d4dda707a9aa9c0c35c745039e18e89b', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.365 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'a2578f61-3f19-40f4-a32f-97cf22569550', 'name': 'vn-vo2qfhx-3o3bmw4mfsy3-eo23jljqgyv6-vnf-pi7veetjym6y', 'flavor': {'id': 'c2a8df4d-a1d7-42a3-8279-8c7de8a1a662', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '718285d9-0264-40f4-9fb3-d2faff180284'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'aa8f1f3bbce34237a208c8e92ca9286f', 'user_id': '3c0ab9326d69400aa6a4a91432885d7f', 'hostId': '5b2ad2004cb5a5985538ff82d4dda707a9aa9c0c35c745039e18e89b', 'status': 'active', 'metadata': {'metering.server_group': '06b33269-d1c6-4fb9-a44b-be304982a550'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.366 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.366 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.367 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.367 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.368 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-26T17:00:31.367390) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:00:31 compute-0 openstack_network_exporter[204387]: ERROR   17:00:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 17:00:31 compute-0 openstack_network_exporter[204387]: ERROR   17:00:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.472 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.472 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.473 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.563 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.563 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.564 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.564 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.565 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f04ce8af080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.565 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.565 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.565 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.565 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.565 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.latency volume: 876270399 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.566 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.latency volume: 10769042 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.566 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-26T17:00:31.565553) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.566 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.566 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.latency volume: 1221465504 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.567 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.latency volume: 9811607 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.567 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.568 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.568 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f04ce8af0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.568 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.568 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.568 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.568 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.568 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.569 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.569 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.569 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.570 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.570 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.570 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-26T17:00:31.568654) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.571 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.571 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f04ce8af470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.571 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.571 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.571 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.571 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.572 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-26T17:00:31.571857) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.577 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.582 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.582 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.583 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f04ce8ac260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.583 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.583 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f04cfa81c70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.583 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.583 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.583 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.583 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.584 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-26T17:00:31.583874) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.613 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/cpu volume: 50060000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.654 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/cpu volume: 44540000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.655 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.655 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f04ce8ac170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.656 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.656 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.656 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.656 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.656 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.656 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.657 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.657 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f04ce8af140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.657 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.657 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.658 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-26T17:00:31.656342) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.658 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.658 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.659 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.659 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f04ce8adb50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.660 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-26T17:00:31.658763) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.660 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.660 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.660 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.660 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.661 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-26T17:00:31.660651) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.660 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.662 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.663 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.663 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f04ce8ad9a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.664 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.664 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.664 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.665 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.665 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.bytes volume: 2412 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.665 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-26T17:00:31.664848) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.666 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.outgoing.bytes volume: 2468 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.666 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.667 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f04ce8af1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.667 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.667 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.667 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.668 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.669 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.669 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-26T17:00:31.668158) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.669 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f04ce8ad8e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.670 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.670 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.670 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.670 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.671 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.671 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-26T17:00:31.670732) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.672 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.outgoing.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.672 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.673 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f04ce8ada60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.673 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.673 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.674 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.674 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.674 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-26T17:00:31.674190) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.674 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.675 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.676 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.676 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f04ce8adaf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.676 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.677 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f04ce8ad880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.677 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.677 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.677 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.678 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.678 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-26T17:00:31.677923) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.678 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.679 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.679 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.680 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f04ce8af3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.680 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.680 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.680 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.680 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.681 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-26T17:00:31.680827) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.681 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/memory.usage volume: 48.79296875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.681 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/memory.usage volume: 48.953125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.682 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.682 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f04ce8af6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.683 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.683 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.683 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.683 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.684 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.684 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.684 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.685 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f04ce8af410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.685 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.685 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.685 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.685 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.685 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-26T17:00:31.683670) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.686 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-26T17:00:31.685761) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.686 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.bytes volume: 2304 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.686 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.incoming.bytes volume: 1696 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.686 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.686 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f04ce8ac470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.687 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.687 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.687 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.687 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.687 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.687 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-26T17:00:31.687374) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.687 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.688 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.688 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f04d17142f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.688 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.688 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.688 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.689 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.689 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-26T17:00:31.688923) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.720 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.720 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.721 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.753 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.754 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.754 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.755 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.755 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f04ce8afe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.756 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.756 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.756 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.756 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.756 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.757 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.757 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.758 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.758 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.758 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.759 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.759 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f04ce8aede0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.760 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.760 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.760 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.760 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.760 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.761 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.761 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-26T17:00:31.756583) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.761 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-26T17:00:31.760622) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.761 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.762 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.762 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.762 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.763 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.763 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f04ce8aef00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.763 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.764 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.764 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.764 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.764 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.latency volume: 331117122 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.765 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.latency volume: 97972603 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.765 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.latency volume: 58692222 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.765 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.latency volume: 437272566 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.766 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-26T17:00:31.764475) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.766 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.latency volume: 86953754 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.766 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.latency volume: 62824695 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.767 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.767 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f04ce8ac710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.767 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.768 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.768 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.768 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.768 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.768 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-26T17:00:31.768257) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.769 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.769 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.769 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f04ce8aef60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.769 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.769 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.770 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.770 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.770 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.770 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-26T17:00:31.770219) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.770 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.771 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.771 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.772 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.772 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.772 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.773 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f04ce8aefc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.773 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.773 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.773 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.773 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.774 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.774 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-26T17:00:31.773639) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.774 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.774 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.775 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.775 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.775 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.776 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.776 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.777 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.777 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.777 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.777 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.777 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.777 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.777 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.778 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.778 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.778 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.778 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.778 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.778 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.778 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.778 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.778 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.778 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.779 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.779 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.779 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.779 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.779 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.779 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.779 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:00:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:00:31.779 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:00:32 compute-0 podman[247450]: 2026-01-26 17:00:32.194046842 +0000 UTC m=+0.074938567 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:00:34 compute-0 nova_compute[185389]: 2026-01-26 17:00:34.030 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:00:35 compute-0 nova_compute[185389]: 2026-01-26 17:00:35.125 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:00:35 compute-0 podman[247472]: 2026-01-26 17:00:35.205457686 +0000 UTC m=+0.080915165 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., managed_by=edpm_ansible, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, build-date=2024-09-18T21:23:30, container_name=kepler, distribution-scope=public, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, release=1214.1726694543)
Jan 26 17:00:35 compute-0 podman[247471]: 2026-01-26 17:00:35.224894974 +0000 UTC m=+0.104591916 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 17:00:35 compute-0 podman[247470]: 2026-01-26 17:00:35.237297855 +0000 UTC m=+0.121149087 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 26 17:00:39 compute-0 nova_compute[185389]: 2026-01-26 17:00:39.032 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:00:40 compute-0 nova_compute[185389]: 2026-01-26 17:00:40.129 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:00:44 compute-0 nova_compute[185389]: 2026-01-26 17:00:44.036 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:00:45 compute-0 nova_compute[185389]: 2026-01-26 17:00:45.132 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:00:47 compute-0 nova_compute[185389]: 2026-01-26 17:00:47.721 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:00:47 compute-0 nova_compute[185389]: 2026-01-26 17:00:47.722 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 17:00:49 compute-0 nova_compute[185389]: 2026-01-26 17:00:49.042 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:00:50 compute-0 nova_compute[185389]: 2026-01-26 17:00:50.135 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:00:52 compute-0 nova_compute[185389]: 2026-01-26 17:00:52.722 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:00:54 compute-0 nova_compute[185389]: 2026-01-26 17:00:54.045 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:00:54 compute-0 nova_compute[185389]: 2026-01-26 17:00:54.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:00:54 compute-0 nova_compute[185389]: 2026-01-26 17:00:54.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:00:55 compute-0 nova_compute[185389]: 2026-01-26 17:00:55.139 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:00:56 compute-0 nova_compute[185389]: 2026-01-26 17:00:56.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:00:56 compute-0 nova_compute[185389]: 2026-01-26 17:00:56.721 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 17:00:56 compute-0 nova_compute[185389]: 2026-01-26 17:00:56.721 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 17:00:56 compute-0 nova_compute[185389]: 2026-01-26 17:00:56.947 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "refresh_cache-60ba224f-9c5d-4eb4-b501-66d7339832b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 17:00:56 compute-0 nova_compute[185389]: 2026-01-26 17:00:56.947 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquired lock "refresh_cache-60ba224f-9c5d-4eb4-b501-66d7339832b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 17:00:56 compute-0 nova_compute[185389]: 2026-01-26 17:00:56.947 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 26 17:00:56 compute-0 nova_compute[185389]: 2026-01-26 17:00:56.948 185393 DEBUG nova.objects.instance [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 60ba224f-9c5d-4eb4-b501-66d7339832b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 17:00:58 compute-0 podman[247534]: 2026-01-26 17:00:58.221639805 +0000 UTC m=+0.094124737 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9-minimal, container_name=openstack_network_exporter, distribution-scope=public, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, release=1755695350, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, vendor=Red Hat, Inc., version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Jan 26 17:00:58 compute-0 podman[247535]: 2026-01-26 17:00:58.265355629 +0000 UTC m=+0.126750866 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, tcib_managed=true, org.label-schema.build-date=20260120, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Jan 26 17:00:58 compute-0 podman[247536]: 2026-01-26 17:00:58.265701388 +0000 UTC m=+0.118089745 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 26 17:00:59 compute-0 nova_compute[185389]: 2026-01-26 17:00:59.050 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:00:59 compute-0 podman[201244]: time="2026-01-26T17:00:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 17:00:59 compute-0 podman[201244]: @ - - [26/Jan/2026:17:00:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 17:00:59 compute-0 podman[201244]: @ - - [26/Jan/2026:17:00:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4378 "" "Go-http-client/1.1"
Jan 26 17:01:00 compute-0 nova_compute[185389]: 2026-01-26 17:01:00.142 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:01:01 compute-0 nova_compute[185389]: 2026-01-26 17:01:01.157 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Updating instance_info_cache with network_info: [{"id": "0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f", "address": "fa:16:3e:b0:51:31", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.57", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f88f3ae-fb", "ovs_interfaceid": "0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 17:01:01 compute-0 nova_compute[185389]: 2026-01-26 17:01:01.178 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Releasing lock "refresh_cache-60ba224f-9c5d-4eb4-b501-66d7339832b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 17:01:01 compute-0 nova_compute[185389]: 2026-01-26 17:01:01.178 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 26 17:01:01 compute-0 nova_compute[185389]: 2026-01-26 17:01:01.179 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:01:01 compute-0 podman[247595]: 2026-01-26 17:01:01.215474142 +0000 UTC m=+0.090651235 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 26 17:01:01 compute-0 openstack_network_exporter[204387]: ERROR   17:01:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 17:01:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:01:01 compute-0 openstack_network_exporter[204387]: ERROR   17:01:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 17:01:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:01:01 compute-0 CROND[247617]: (root) CMD (run-parts /etc/cron.hourly)
Jan 26 17:01:01 compute-0 run-parts[247620]: (/etc/cron.hourly) starting 0anacron
Jan 26 17:01:01 compute-0 run-parts[247626]: (/etc/cron.hourly) finished 0anacron
Jan 26 17:01:01 compute-0 CROND[247616]: (root) CMDEND (run-parts /etc/cron.hourly)
Jan 26 17:01:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:01:01.746 106955 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:01:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:01:01.746 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:01:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:01:01.747 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:01:03 compute-0 podman[247627]: 2026-01-26 17:01:03.231836908 +0000 UTC m=+0.107105863 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 17:01:03 compute-0 nova_compute[185389]: 2026-01-26 17:01:03.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:01:03 compute-0 nova_compute[185389]: 2026-01-26 17:01:03.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:01:03 compute-0 nova_compute[185389]: 2026-01-26 17:01:03.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:01:03 compute-0 nova_compute[185389]: 2026-01-26 17:01:03.745 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:01:03 compute-0 nova_compute[185389]: 2026-01-26 17:01:03.745 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:01:03 compute-0 nova_compute[185389]: 2026-01-26 17:01:03.746 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:01:03 compute-0 nova_compute[185389]: 2026-01-26 17:01:03.746 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 17:01:03 compute-0 nova_compute[185389]: 2026-01-26 17:01:03.848 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:01:03 compute-0 nova_compute[185389]: 2026-01-26 17:01:03.938 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:01:03 compute-0 nova_compute[185389]: 2026-01-26 17:01:03.940 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:01:04 compute-0 nova_compute[185389]: 2026-01-26 17:01:04.042 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:01:04 compute-0 nova_compute[185389]: 2026-01-26 17:01:04.044 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:01:04 compute-0 nova_compute[185389]: 2026-01-26 17:01:04.067 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:01:04 compute-0 nova_compute[185389]: 2026-01-26 17:01:04.121 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:01:04 compute-0 nova_compute[185389]: 2026-01-26 17:01:04.122 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:01:04 compute-0 nova_compute[185389]: 2026-01-26 17:01:04.190 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:01:04 compute-0 nova_compute[185389]: 2026-01-26 17:01:04.202 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:01:04 compute-0 nova_compute[185389]: 2026-01-26 17:01:04.282 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:01:04 compute-0 nova_compute[185389]: 2026-01-26 17:01:04.284 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:01:04 compute-0 nova_compute[185389]: 2026-01-26 17:01:04.366 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:01:04 compute-0 nova_compute[185389]: 2026-01-26 17:01:04.367 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:01:04 compute-0 nova_compute[185389]: 2026-01-26 17:01:04.463 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:01:04 compute-0 nova_compute[185389]: 2026-01-26 17:01:04.464 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:01:04 compute-0 nova_compute[185389]: 2026-01-26 17:01:04.558 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:01:05 compute-0 nova_compute[185389]: 2026-01-26 17:01:05.112 185393 WARNING nova.virt.libvirt.driver [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:01:05 compute-0 nova_compute[185389]: 2026-01-26 17:01:05.113 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4838MB free_disk=72.40092086791992GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 17:01:05 compute-0 nova_compute[185389]: 2026-01-26 17:01:05.114 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:01:05 compute-0 nova_compute[185389]: 2026-01-26 17:01:05.114 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:01:05 compute-0 nova_compute[185389]: 2026-01-26 17:01:05.145 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:01:05 compute-0 nova_compute[185389]: 2026-01-26 17:01:05.248 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance 60ba224f-9c5d-4eb4-b501-66d7339832b9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 17:01:05 compute-0 nova_compute[185389]: 2026-01-26 17:01:05.249 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance a2578f61-3f19-40f4-a32f-97cf22569550 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 17:01:05 compute-0 nova_compute[185389]: 2026-01-26 17:01:05.250 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 17:01:05 compute-0 nova_compute[185389]: 2026-01-26 17:01:05.251 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 17:01:05 compute-0 nova_compute[185389]: 2026-01-26 17:01:05.389 185393 DEBUG nova.compute.provider_tree [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:01:05 compute-0 nova_compute[185389]: 2026-01-26 17:01:05.405 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:01:05 compute-0 nova_compute[185389]: 2026-01-26 17:01:05.410 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 17:01:05 compute-0 nova_compute[185389]: 2026-01-26 17:01:05.410 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.296s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:01:06 compute-0 podman[247670]: 2026-01-26 17:01:06.234323727 +0000 UTC m=+0.109427525 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 17:01:06 compute-0 podman[247671]: 2026-01-26 17:01:06.277311952 +0000 UTC m=+0.136605739 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, vcs-type=git, config_id=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, name=ubi9, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, release-0.7.12=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Jan 26 17:01:06 compute-0 podman[247669]: 2026-01-26 17:01:06.281665838 +0000 UTC m=+0.153554790 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, 
org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 26 17:01:09 compute-0 nova_compute[185389]: 2026-01-26 17:01:09.071 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:01:10 compute-0 nova_compute[185389]: 2026-01-26 17:01:10.150 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:01:14 compute-0 nova_compute[185389]: 2026-01-26 17:01:14.074 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:01:15 compute-0 nova_compute[185389]: 2026-01-26 17:01:15.152 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:01:19 compute-0 nova_compute[185389]: 2026-01-26 17:01:19.076 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:01:20 compute-0 nova_compute[185389]: 2026-01-26 17:01:20.155 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:01:24 compute-0 nova_compute[185389]: 2026-01-26 17:01:24.079 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:01:25 compute-0 nova_compute[185389]: 2026-01-26 17:01:25.158 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:01:29 compute-0 nova_compute[185389]: 2026-01-26 17:01:29.083 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:01:29 compute-0 podman[247730]: 2026-01-26 17:01:29.252829397 +0000 UTC m=+0.110591317 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260120, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Jan 26 17:01:29 compute-0 podman[247731]: 2026-01-26 17:01:29.279585869 +0000 UTC m=+0.113218456 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 26 17:01:29 compute-0 podman[247729]: 2026-01-26 17:01:29.287012357 +0000 UTC m=+0.139709922 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., managed_by=edpm_ansible, release=1755695350, distribution-scope=public, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, container_name=openstack_network_exporter, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 
'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 26 17:01:29 compute-0 podman[201244]: time="2026-01-26T17:01:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 17:01:29 compute-0 podman[201244]: @ - - [26/Jan/2026:17:01:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 17:01:29 compute-0 podman[201244]: @ - - [26/Jan/2026:17:01:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4383 "" "Go-http-client/1.1"
Jan 26 17:01:30 compute-0 nova_compute[185389]: 2026-01-26 17:01:30.161 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:01:31 compute-0 openstack_network_exporter[204387]: ERROR   17:01:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 17:01:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:01:31 compute-0 openstack_network_exporter[204387]: ERROR   17:01:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 17:01:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:01:32 compute-0 podman[247792]: 2026-01-26 17:01:32.240753642 +0000 UTC m=+0.119501929 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 26 17:01:34 compute-0 nova_compute[185389]: 2026-01-26 17:01:34.086 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:01:34 compute-0 podman[247816]: 2026-01-26 17:01:34.182346347 +0000 UTC m=+0.071027872 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent)
Jan 26 17:01:35 compute-0 nova_compute[185389]: 2026-01-26 17:01:35.162 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:01:37 compute-0 podman[247836]: 2026-01-26 17:01:37.208362888 +0000 UTC m=+0.085450633 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Jan 26 17:01:37 compute-0 podman[247835]: 2026-01-26 17:01:37.224287421 +0000 UTC m=+0.111920863 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, 
io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 26 17:01:37 compute-0 podman[247840]: 2026-01-26 17:01:37.234128839 +0000 UTC m=+0.097217994 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_id=kepler, io.openshift.expose-services=, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, version=9.4, name=ubi9, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 26 17:01:39 compute-0 nova_compute[185389]: 2026-01-26 17:01:39.088 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:01:40 compute-0 nova_compute[185389]: 2026-01-26 17:01:40.165 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:01:44 compute-0 nova_compute[185389]: 2026-01-26 17:01:44.092 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:01:45 compute-0 nova_compute[185389]: 2026-01-26 17:01:45.168 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:01:49 compute-0 nova_compute[185389]: 2026-01-26 17:01:49.097 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:01:50 compute-0 nova_compute[185389]: 2026-01-26 17:01:50.171 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:01:51 compute-0 nova_compute[185389]: 2026-01-26 17:01:51.411 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:01:51 compute-0 nova_compute[185389]: 2026-01-26 17:01:51.412 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 17:01:53 compute-0 nova_compute[185389]: 2026-01-26 17:01:53.722 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:01:54 compute-0 nova_compute[185389]: 2026-01-26 17:01:54.099 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:01:54 compute-0 nova_compute[185389]: 2026-01-26 17:01:54.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:01:55 compute-0 nova_compute[185389]: 2026-01-26 17:01:55.175 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:01:56 compute-0 nova_compute[185389]: 2026-01-26 17:01:56.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:01:58 compute-0 nova_compute[185389]: 2026-01-26 17:01:58.721 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:01:58 compute-0 nova_compute[185389]: 2026-01-26 17:01:58.722 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 17:01:59 compute-0 nova_compute[185389]: 2026-01-26 17:01:59.101 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:01:59 compute-0 nova_compute[185389]: 2026-01-26 17:01:59.291 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "refresh_cache-a2578f61-3f19-40f4-a32f-97cf22569550" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 17:01:59 compute-0 nova_compute[185389]: 2026-01-26 17:01:59.292 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquired lock "refresh_cache-a2578f61-3f19-40f4-a32f-97cf22569550" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 17:01:59 compute-0 nova_compute[185389]: 2026-01-26 17:01:59.292 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 26 17:01:59 compute-0 podman[201244]: time="2026-01-26T17:01:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 17:01:59 compute-0 podman[201244]: @ - - [26/Jan/2026:17:01:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 17:01:59 compute-0 podman[201244]: @ - - [26/Jan/2026:17:01:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4376 "" "Go-http-client/1.1"
Jan 26 17:02:00 compute-0 nova_compute[185389]: 2026-01-26 17:02:00.177 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:02:00 compute-0 podman[247903]: 2026-01-26 17:02:00.201262139 +0000 UTC m=+0.077425488 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260120, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Jan 26 17:02:00 compute-0 podman[247902]: 2026-01-26 17:02:00.203198782 +0000 UTC m=+0.084845409 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, io.openshift.expose-services=, vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9-minimal, release=1755695350, config_id=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7)
Jan 26 17:02:00 compute-0 podman[247904]: 2026-01-26 17:02:00.217593874 +0000 UTC m=+0.089048965 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 26 17:02:01 compute-0 openstack_network_exporter[204387]: ERROR   17:02:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 17:02:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:02:01 compute-0 openstack_network_exporter[204387]: ERROR   17:02:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 17:02:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:02:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:02:01.747 106955 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:02:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:02:01.748 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:02:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:02:01.749 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:02:03 compute-0 podman[247963]: 2026-01-26 17:02:03.219628959 +0000 UTC m=+0.099596522 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 26 17:02:03 compute-0 nova_compute[185389]: 2026-01-26 17:02:03.313 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Updating instance_info_cache with network_info: [{"id": "58a644b5-e3a2-4838-9216-8540447cf0a5", "address": "fa:16:3e:ac:8d:a5", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.107", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58a644b5-e3", "ovs_interfaceid": "58a644b5-e3a2-4838-9216-8540447cf0a5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 17:02:03 compute-0 nova_compute[185389]: 2026-01-26 17:02:03.977 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Releasing lock "refresh_cache-a2578f61-3f19-40f4-a32f-97cf22569550" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 17:02:03 compute-0 nova_compute[185389]: 2026-01-26 17:02:03.977 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 26 17:02:03 compute-0 nova_compute[185389]: 2026-01-26 17:02:03.977 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:02:03 compute-0 nova_compute[185389]: 2026-01-26 17:02:03.978 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:02:04 compute-0 nova_compute[185389]: 2026-01-26 17:02:04.032 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:02:04 compute-0 nova_compute[185389]: 2026-01-26 17:02:04.033 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:02:04 compute-0 nova_compute[185389]: 2026-01-26 17:02:04.034 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:02:04 compute-0 nova_compute[185389]: 2026-01-26 17:02:04.034 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 17:02:04 compute-0 nova_compute[185389]: 2026-01-26 17:02:04.102 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:02:04 compute-0 nova_compute[185389]: 2026-01-26 17:02:04.143 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:02:04 compute-0 nova_compute[185389]: 2026-01-26 17:02:04.252 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json" returned: 0 in 0.110s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:02:04 compute-0 nova_compute[185389]: 2026-01-26 17:02:04.255 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:02:04 compute-0 nova_compute[185389]: 2026-01-26 17:02:04.321 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:02:04 compute-0 nova_compute[185389]: 2026-01-26 17:02:04.322 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:02:04 compute-0 nova_compute[185389]: 2026-01-26 17:02:04.389 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:02:04 compute-0 nova_compute[185389]: 2026-01-26 17:02:04.391 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:02:04 compute-0 nova_compute[185389]: 2026-01-26 17:02:04.480 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:02:04 compute-0 nova_compute[185389]: 2026-01-26 17:02:04.489 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:02:04 compute-0 nova_compute[185389]: 2026-01-26 17:02:04.551 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:02:04 compute-0 nova_compute[185389]: 2026-01-26 17:02:04.553 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:02:04 compute-0 nova_compute[185389]: 2026-01-26 17:02:04.620 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:02:04 compute-0 nova_compute[185389]: 2026-01-26 17:02:04.622 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:02:04 compute-0 nova_compute[185389]: 2026-01-26 17:02:04.706 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:02:04 compute-0 nova_compute[185389]: 2026-01-26 17:02:04.708 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:02:04 compute-0 nova_compute[185389]: 2026-01-26 17:02:04.778 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:02:05 compute-0 nova_compute[185389]: 2026-01-26 17:02:05.132 185393 WARNING nova.virt.libvirt.driver [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:02:05 compute-0 nova_compute[185389]: 2026-01-26 17:02:05.133 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4845MB free_disk=72.39701461791992GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 17:02:05 compute-0 nova_compute[185389]: 2026-01-26 17:02:05.133 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:02:05 compute-0 nova_compute[185389]: 2026-01-26 17:02:05.134 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:02:05 compute-0 nova_compute[185389]: 2026-01-26 17:02:05.180 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:02:05 compute-0 podman[248010]: 2026-01-26 17:02:05.2281828 +0000 UTC m=+0.113102829 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 26 17:02:05 compute-0 nova_compute[185389]: 2026-01-26 17:02:05.419 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance 60ba224f-9c5d-4eb4-b501-66d7339832b9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 17:02:05 compute-0 nova_compute[185389]: 2026-01-26 17:02:05.421 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance a2578f61-3f19-40f4-a32f-97cf22569550 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 17:02:05 compute-0 nova_compute[185389]: 2026-01-26 17:02:05.422 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 17:02:05 compute-0 nova_compute[185389]: 2026-01-26 17:02:05.423 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 17:02:05 compute-0 nova_compute[185389]: 2026-01-26 17:02:05.519 185393 DEBUG nova.compute.provider_tree [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:02:05 compute-0 nova_compute[185389]: 2026-01-26 17:02:05.887 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:02:05 compute-0 nova_compute[185389]: 2026-01-26 17:02:05.889 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 17:02:05 compute-0 nova_compute[185389]: 2026-01-26 17:02:05.889 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.755s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:02:07 compute-0 nova_compute[185389]: 2026-01-26 17:02:07.631 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:02:07 compute-0 nova_compute[185389]: 2026-01-26 17:02:07.632 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:02:07 compute-0 nova_compute[185389]: 2026-01-26 17:02:07.714 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:02:08 compute-0 podman[248030]: 2026-01-26 17:02:08.248686577 +0000 UTC m=+0.119927985 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=base rhel9, release-0.7.12=, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., version=9.4, release=1214.1726694543, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9., 
io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, config_id=kepler, name=ubi9)
Jan 26 17:02:08 compute-0 podman[248029]: 2026-01-26 17:02:08.264837836 +0000 UTC m=+0.139171438 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, 
config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 17:02:08 compute-0 podman[248028]: 2026-01-26 17:02:08.281809298 +0000 UTC m=+0.160239942 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 17:02:09 compute-0 nova_compute[185389]: 2026-01-26 17:02:09.105 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:02:10 compute-0 nova_compute[185389]: 2026-01-26 17:02:10.183 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:02:14 compute-0 nova_compute[185389]: 2026-01-26 17:02:14.107 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:02:15 compute-0 nova_compute[185389]: 2026-01-26 17:02:15.187 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:02:19 compute-0 nova_compute[185389]: 2026-01-26 17:02:19.110 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:02:20 compute-0 nova_compute[185389]: 2026-01-26 17:02:20.190 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:02:24 compute-0 nova_compute[185389]: 2026-01-26 17:02:24.114 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:02:25 compute-0 nova_compute[185389]: 2026-01-26 17:02:25.192 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:02:29 compute-0 nova_compute[185389]: 2026-01-26 17:02:29.118 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:02:29 compute-0 podman[201244]: time="2026-01-26T17:02:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 17:02:29 compute-0 podman[201244]: @ - - [26/Jan/2026:17:02:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 17:02:29 compute-0 podman[201244]: @ - - [26/Jan/2026:17:02:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4382 "" "Go-http-client/1.1"
Jan 26 17:02:30 compute-0 nova_compute[185389]: 2026-01-26 17:02:30.194 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:02:31 compute-0 podman[248093]: 2026-01-26 17:02:31.244418301 +0000 UTC m=+0.088712524 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 17:02:31 compute-0 podman[248092]: 2026-01-26 17:02:31.247203837 +0000 UTC m=+0.087585114 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, 
config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260120, org.label-schema.license=GPLv2)
Jan 26 17:02:31 compute-0 podman[248091]: 2026-01-26 17:02:31.249922591 +0000 UTC m=+0.094991335 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, vcs-type=git, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=openstack_network_exporter, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64)
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.346 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.347 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.347 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f04ce8af020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.348 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.348 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.348 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.348 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.349 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.349 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.349 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.349 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.349 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.349 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.349 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.349 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.349 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8adb20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.349 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.350 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.350 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.350 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.350 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.350 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.350 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.350 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.350 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.350 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.350 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.351 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.355 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '60ba224f-9c5d-4eb4-b501-66d7339832b9', 'name': 'test_0', 'flavor': {'id': 'c2a8df4d-a1d7-42a3-8279-8c7de8a1a662', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '718285d9-0264-40f4-9fb3-d2faff180284'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'aa8f1f3bbce34237a208c8e92ca9286f', 'user_id': '3c0ab9326d69400aa6a4a91432885d7f', 'hostId': '5b2ad2004cb5a5985538ff82d4dda707a9aa9c0c35c745039e18e89b', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.360 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'a2578f61-3f19-40f4-a32f-97cf22569550', 'name': 'vn-vo2qfhx-3o3bmw4mfsy3-eo23jljqgyv6-vnf-pi7veetjym6y', 'flavor': {'id': 'c2a8df4d-a1d7-42a3-8279-8c7de8a1a662', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '718285d9-0264-40f4-9fb3-d2faff180284'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'aa8f1f3bbce34237a208c8e92ca9286f', 'user_id': '3c0ab9326d69400aa6a4a91432885d7f', 'hostId': '5b2ad2004cb5a5985538ff82d4dda707a9aa9c0c35c745039e18e89b', 'status': 'active', 'metadata': {'metering.server_group': '06b33269-d1c6-4fb9-a44b-be304982a550'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.360 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.360 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.360 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.360 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.361 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-26T17:02:31.360835) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:02:31 compute-0 openstack_network_exporter[204387]: ERROR   17:02:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 17:02:31 compute-0 openstack_network_exporter[204387]: ERROR   17:02:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.446 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.446 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.446 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.527 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.528 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.528 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.529 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.529 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f04ce8af080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.529 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.530 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.530 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.530 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.530 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.latency volume: 876270399 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.530 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-26T17:02:31.530328) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.531 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.latency volume: 10769042 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.531 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.532 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.latency volume: 1221465504 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.532 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.latency volume: 9811607 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.533 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.534 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.534 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f04ce8af0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.534 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.534 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.535 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.535 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.535 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.536 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.536 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-26T17:02:31.535363) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.536 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.537 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.537 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.538 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.539 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.539 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f04ce8af470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.540 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.540 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.540 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.541 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-26T17:02:31.540863) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.540 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.546 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.551 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.551 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.552 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f04ce8ac260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.552 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.552 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f04cfa81c70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.552 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.552 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.552 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.553 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.554 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-26T17:02:31.553102) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.577 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/cpu volume: 51820000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.607 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/cpu volume: 46220000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.608 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.608 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f04ce8ac170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.608 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.608 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.608 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.608 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.608 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.609 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.609 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.609 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-26T17:02:31.608680) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.610 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f04ce8af140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.610 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.610 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.610 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.610 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.611 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.611 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-26T17:02:31.610536) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.611 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f04ce8adb50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.611 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.611 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.611 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.612 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.612 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.612 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.612 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.613 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f04ce8ad9a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.613 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.613 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.613 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.613 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.613 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.bytes volume: 2412 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.613 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.outgoing.bytes volume: 2538 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.614 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.614 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f04ce8af1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.614 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.614 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.614 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.614 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.615 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.615 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f04ce8ad8e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.615 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.615 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.615 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.615 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.615 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.616 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.outgoing.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.616 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.616 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f04ce8ada60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.616 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.617 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-26T17:02:31.611996) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.617 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.617 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.617 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.617 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.617 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-26T17:02:31.613543) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.617 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.618 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-26T17:02:31.614905) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.618 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.618 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-26T17:02:31.615791) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.618 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f04ce8adaf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.618 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.618 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f04ce8ad880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.618 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.618 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.619 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.619 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.619 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.619 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.619 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-26T17:02:31.617486) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.619 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.619 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f04ce8af3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.620 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.620 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.620 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.620 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.620 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-26T17:02:31.619101) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.620 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/memory.usage volume: 48.79296875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.620 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/memory.usage volume: 48.953125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.620 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-26T17:02:31.620300) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.621 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.621 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f04ce8af6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.621 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.621 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.621 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.621 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.621 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.621 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.622 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.622 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f04ce8af410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.622 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.622 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.622 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.622 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.622 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.bytes volume: 2304 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.622 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-26T17:02:31.621501) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.623 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.incoming.bytes volume: 1696 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.623 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.623 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-26T17:02:31.622786) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.623 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f04ce8ac470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.623 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.623 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.623 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.624 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.624 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.624 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.624 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-26T17:02:31.624000) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.624 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.624 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f04d17142f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.625 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.625 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.625 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.625 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.625 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-26T17:02:31.625297) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.649 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.649 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.650 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.676 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.677 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.677 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.678 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.678 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f04ce8afe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.678 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.678 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.678 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.678 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.678 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.679 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.679 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.679 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.679 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.679 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-26T17:02:31.678613) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.680 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.680 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.680 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f04ce8aede0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.680 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.680 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.680 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.681 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.681 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.681 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.681 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.682 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.682 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.683 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.683 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.683 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f04ce8aef00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.684 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.684 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.684 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.684 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.684 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.latency volume: 331117122 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.684 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-26T17:02:31.681033) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.684 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.latency volume: 97972603 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.684 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.latency volume: 58692222 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.685 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.latency volume: 437272566 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.685 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-26T17:02:31.684447) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.685 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.latency volume: 86953754 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.685 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.latency volume: 62824695 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.686 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.686 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f04ce8ac710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.686 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.686 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.686 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.686 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.686 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.686 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-26T17:02:31.686564) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.687 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.687 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.687 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f04ce8aef60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.687 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.687 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.687 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.687 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.688 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.688 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.688 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-26T17:02:31.687886) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.688 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.688 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.689 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.689 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.689 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.689 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f04ce8aefc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.690 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.690 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.690 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.690 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.690 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.690 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-26T17:02:31.690251) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.690 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.690 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.691 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.691 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.691 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.691 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.692 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.692 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.692 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.692 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.692 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.692 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.693 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.693 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.693 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.693 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.693 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.693 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.693 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.693 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.693 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.693 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.694 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.694 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.694 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.694 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.694 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.694 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.694 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.694 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.694 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:02:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:02:31.694 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:02:34 compute-0 nova_compute[185389]: 2026-01-26 17:02:34.121 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:02:34 compute-0 podman[248153]: 2026-01-26 17:02:34.23045354 +0000 UTC m=+0.108330037 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 26 17:02:35 compute-0 nova_compute[185389]: 2026-01-26 17:02:35.198 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:02:36 compute-0 podman[248177]: 2026-01-26 17:02:36.290849624 +0000 UTC m=+0.157352682 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 26 17:02:39 compute-0 nova_compute[185389]: 2026-01-26 17:02:39.126 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:02:39 compute-0 podman[248197]: 2026-01-26 17:02:39.217514967 +0000 UTC m=+0.097291369 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi)
Jan 26 17:02:39 compute-0 podman[248198]: 2026-01-26 17:02:39.2294058 +0000 UTC m=+0.102567072 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9, vendor=Red Hat, Inc., version=9.4, distribution-scope=public, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, release=1214.1726694543, build-date=2024-09-18T21:23:30, container_name=kepler, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=kepler, 
description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9)
Jan 26 17:02:39 compute-0 podman[248196]: 2026-01-26 17:02:39.273842869 +0000 UTC m=+0.148466981 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3)
Jan 26 17:02:40 compute-0 nova_compute[185389]: 2026-01-26 17:02:40.202 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:02:44 compute-0 nova_compute[185389]: 2026-01-26 17:02:44.130 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:02:45 compute-0 nova_compute[185389]: 2026-01-26 17:02:45.203 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:02:49 compute-0 nova_compute[185389]: 2026-01-26 17:02:49.135 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:02:50 compute-0 nova_compute[185389]: 2026-01-26 17:02:50.204 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:02:50 compute-0 nova_compute[185389]: 2026-01-26 17:02:50.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:02:50 compute-0 nova_compute[185389]: 2026-01-26 17:02:50.720 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 17:02:52 compute-0 nova_compute[185389]: 2026-01-26 17:02:52.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:02:52 compute-0 nova_compute[185389]: 2026-01-26 17:02:52.721 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 26 17:02:52 compute-0 nova_compute[185389]: 2026-01-26 17:02:52.745 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 26 17:02:54 compute-0 nova_compute[185389]: 2026-01-26 17:02:54.139 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:02:55 compute-0 nova_compute[185389]: 2026-01-26 17:02:55.207 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:02:55 compute-0 nova_compute[185389]: 2026-01-26 17:02:55.746 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:02:55 compute-0 nova_compute[185389]: 2026-01-26 17:02:55.746 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:02:56 compute-0 nova_compute[185389]: 2026-01-26 17:02:56.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:02:58 compute-0 nova_compute[185389]: 2026-01-26 17:02:58.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:02:58 compute-0 nova_compute[185389]: 2026-01-26 17:02:58.721 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 17:02:58 compute-0 nova_compute[185389]: 2026-01-26 17:02:58.722 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 17:02:59 compute-0 nova_compute[185389]: 2026-01-26 17:02:59.145 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:02:59 compute-0 podman[201244]: time="2026-01-26T17:02:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 17:02:59 compute-0 podman[201244]: @ - - [26/Jan/2026:17:02:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 17:02:59 compute-0 podman[201244]: @ - - [26/Jan/2026:17:02:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4377 "" "Go-http-client/1.1"
Jan 26 17:02:59 compute-0 nova_compute[185389]: 2026-01-26 17:02:59.845 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "refresh_cache-60ba224f-9c5d-4eb4-b501-66d7339832b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 17:02:59 compute-0 nova_compute[185389]: 2026-01-26 17:02:59.846 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquired lock "refresh_cache-60ba224f-9c5d-4eb4-b501-66d7339832b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 17:02:59 compute-0 nova_compute[185389]: 2026-01-26 17:02:59.847 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 26 17:02:59 compute-0 nova_compute[185389]: 2026-01-26 17:02:59.847 185393 DEBUG nova.objects.instance [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 60ba224f-9c5d-4eb4-b501-66d7339832b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 17:03:00 compute-0 nova_compute[185389]: 2026-01-26 17:03:00.210 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:03:01 compute-0 openstack_network_exporter[204387]: ERROR   17:03:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 17:03:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:03:01 compute-0 openstack_network_exporter[204387]: ERROR   17:03:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 17:03:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:03:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:03:01.749 106955 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:03:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:03:01.752 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:03:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:03:01.753 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:03:02 compute-0 podman[248261]: 2026-01-26 17:03:02.219146463 +0000 UTC m=+0.096339042 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., architecture=x86_64, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, version=9.6, config_id=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7)
Jan 26 17:03:02 compute-0 podman[248262]: 2026-01-26 17:03:02.250637111 +0000 UTC m=+0.108906814 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, 
org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260120)
Jan 26 17:03:02 compute-0 podman[248268]: 2026-01-26 17:03:02.291543013 +0000 UTC m=+0.138541850 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 26 17:03:03 compute-0 nova_compute[185389]: 2026-01-26 17:03:03.026 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Updating instance_info_cache with network_info: [{"id": "0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f", "address": "fa:16:3e:b0:51:31", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.57", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f88f3ae-fb", "ovs_interfaceid": "0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 17:03:03 compute-0 nova_compute[185389]: 2026-01-26 17:03:03.047 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Releasing lock "refresh_cache-60ba224f-9c5d-4eb4-b501-66d7339832b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 17:03:03 compute-0 nova_compute[185389]: 2026-01-26 17:03:03.047 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 26 17:03:03 compute-0 nova_compute[185389]: 2026-01-26 17:03:03.048 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:03:04 compute-0 nova_compute[185389]: 2026-01-26 17:03:04.151 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:03:04 compute-0 nova_compute[185389]: 2026-01-26 17:03:04.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:03:04 compute-0 nova_compute[185389]: 2026-01-26 17:03:04.755 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:03:04 compute-0 nova_compute[185389]: 2026-01-26 17:03:04.755 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:03:04 compute-0 nova_compute[185389]: 2026-01-26 17:03:04.756 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:03:04 compute-0 nova_compute[185389]: 2026-01-26 17:03:04.756 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 17:03:04 compute-0 nova_compute[185389]: 2026-01-26 17:03:04.852 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:03:04 compute-0 nova_compute[185389]: 2026-01-26 17:03:04.948 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:03:04 compute-0 nova_compute[185389]: 2026-01-26 17:03:04.950 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:03:05 compute-0 nova_compute[185389]: 2026-01-26 17:03:05.019 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:03:05 compute-0 nova_compute[185389]: 2026-01-26 17:03:05.020 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:03:05 compute-0 nova_compute[185389]: 2026-01-26 17:03:05.078 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:03:05 compute-0 nova_compute[185389]: 2026-01-26 17:03:05.079 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:03:05 compute-0 nova_compute[185389]: 2026-01-26 17:03:05.143 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:03:05 compute-0 nova_compute[185389]: 2026-01-26 17:03:05.150 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:03:05 compute-0 nova_compute[185389]: 2026-01-26 17:03:05.212 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:03:05 compute-0 nova_compute[185389]: 2026-01-26 17:03:05.220 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:03:05 compute-0 nova_compute[185389]: 2026-01-26 17:03:05.222 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:03:05 compute-0 podman[248333]: 2026-01-26 17:03:05.225823717 +0000 UTC m=+0.118820114 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 26 17:03:05 compute-0 nova_compute[185389]: 2026-01-26 17:03:05.295 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:03:05 compute-0 nova_compute[185389]: 2026-01-26 17:03:05.297 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:03:05 compute-0 nova_compute[185389]: 2026-01-26 17:03:05.365 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:03:05 compute-0 nova_compute[185389]: 2026-01-26 17:03:05.366 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:03:05 compute-0 nova_compute[185389]: 2026-01-26 17:03:05.440 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:03:05 compute-0 nova_compute[185389]: 2026-01-26 17:03:05.872 185393 WARNING nova.virt.libvirt.driver [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:03:05 compute-0 nova_compute[185389]: 2026-01-26 17:03:05.874 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4842MB free_disk=72.39701461791992GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 17:03:05 compute-0 nova_compute[185389]: 2026-01-26 17:03:05.874 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:03:05 compute-0 nova_compute[185389]: 2026-01-26 17:03:05.875 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:03:06 compute-0 nova_compute[185389]: 2026-01-26 17:03:06.037 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance 60ba224f-9c5d-4eb4-b501-66d7339832b9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 17:03:06 compute-0 nova_compute[185389]: 2026-01-26 17:03:06.038 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance a2578f61-3f19-40f4-a32f-97cf22569550 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 17:03:06 compute-0 nova_compute[185389]: 2026-01-26 17:03:06.038 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 17:03:06 compute-0 nova_compute[185389]: 2026-01-26 17:03:06.039 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 17:03:06 compute-0 nova_compute[185389]: 2026-01-26 17:03:06.113 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Refreshing inventories for resource provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 26 17:03:06 compute-0 nova_compute[185389]: 2026-01-26 17:03:06.178 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Updating ProviderTree inventory for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 26 17:03:06 compute-0 nova_compute[185389]: 2026-01-26 17:03:06.179 185393 DEBUG nova.compute.provider_tree [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Updating inventory in ProviderTree for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 26 17:03:06 compute-0 nova_compute[185389]: 2026-01-26 17:03:06.197 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Refreshing aggregate associations for resource provider b0bb5d31-f35b-4a04-b67d-66acc24fb822, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 26 17:03:06 compute-0 nova_compute[185389]: 2026-01-26 17:03:06.226 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Refreshing trait associations for resource provider b0bb5d31-f35b-4a04-b67d-66acc24fb822, traits: COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SHA,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_ABM,HW_CPU_X86_AMD_SVM,HW_CPU_X86_F16C,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE42,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_BMI,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_MMX,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 26 17:03:06 compute-0 nova_compute[185389]: 2026-01-26 17:03:06.290 185393 DEBUG nova.compute.provider_tree [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:03:06 compute-0 nova_compute[185389]: 2026-01-26 17:03:06.307 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:03:06 compute-0 nova_compute[185389]: 2026-01-26 17:03:06.309 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 17:03:06 compute-0 nova_compute[185389]: 2026-01-26 17:03:06.309 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.435s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:03:06 compute-0 nova_compute[185389]: 2026-01-26 17:03:06.310 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:03:06 compute-0 nova_compute[185389]: 2026-01-26 17:03:06.310 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 26 17:03:07 compute-0 podman[248372]: 2026-01-26 17:03:07.216199766 +0000 UTC m=+0.110667982 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 26 17:03:07 compute-0 nova_compute[185389]: 2026-01-26 17:03:07.320 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:03:07 compute-0 nova_compute[185389]: 2026-01-26 17:03:07.320 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:03:09 compute-0 nova_compute[185389]: 2026-01-26 17:03:09.155 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:03:10 compute-0 nova_compute[185389]: 2026-01-26 17:03:10.214 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:03:10 compute-0 podman[248393]: 2026-01-26 17:03:10.250570449 +0000 UTC m=+0.102353955 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, release-0.7.12=, config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, name=ubi9, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, build-date=2024-09-18T21:23:30, distribution-scope=public, architecture=x86_64, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc.)
Jan 26 17:03:10 compute-0 podman[248392]: 2026-01-26 17:03:10.262208487 +0000 UTC m=+0.122497635 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_ipmi, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Jan 26 17:03:10 compute-0 podman[248391]: 2026-01-26 17:03:10.274142081 +0000 UTC m=+0.140744890 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 17:03:14 compute-0 nova_compute[185389]: 2026-01-26 17:03:14.160 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:03:15 compute-0 nova_compute[185389]: 2026-01-26 17:03:15.216 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:03:16 compute-0 nova_compute[185389]: 2026-01-26 17:03:16.725 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:03:19 compute-0 nova_compute[185389]: 2026-01-26 17:03:19.166 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:03:20 compute-0 nova_compute[185389]: 2026-01-26 17:03:20.219 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:03:24 compute-0 nova_compute[185389]: 2026-01-26 17:03:24.170 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:03:25 compute-0 nova_compute[185389]: 2026-01-26 17:03:25.222 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:03:29 compute-0 nova_compute[185389]: 2026-01-26 17:03:29.172 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:03:29 compute-0 podman[201244]: time="2026-01-26T17:03:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 17:03:29 compute-0 podman[201244]: @ - - [26/Jan/2026:17:03:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 17:03:29 compute-0 podman[201244]: @ - - [26/Jan/2026:17:03:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4390 "" "Go-http-client/1.1"
Jan 26 17:03:30 compute-0 nova_compute[185389]: 2026-01-26 17:03:30.226 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:03:31 compute-0 openstack_network_exporter[204387]: ERROR   17:03:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 17:03:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:03:31 compute-0 openstack_network_exporter[204387]: ERROR   17:03:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 17:03:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:03:33 compute-0 podman[248458]: 2026-01-26 17:03:33.283886015 +0000 UTC m=+0.128052134 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 26 17:03:33 compute-0 podman[248456]: 2026-01-26 17:03:33.289819048 +0000 UTC m=+0.132376404 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, architecture=x86_64, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9-minimal, vendor=Red Hat, Inc., vcs-type=git)
Jan 26 17:03:33 compute-0 podman[248457]: 2026-01-26 17:03:33.290711422 +0000 UTC m=+0.135060896 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, 
container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20260120, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, config_id=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, managed_by=edpm_ansible)
Jan 26 17:03:34 compute-0 nova_compute[185389]: 2026-01-26 17:03:34.176 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:03:35 compute-0 nova_compute[185389]: 2026-01-26 17:03:35.230 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:03:36 compute-0 podman[248515]: 2026-01-26 17:03:36.216431177 +0000 UTC m=+0.092535329 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 26 17:03:38 compute-0 podman[248539]: 2026-01-26 17:03:38.224670749 +0000 UTC m=+0.109439029 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 26 17:03:39 compute-0 nova_compute[185389]: 2026-01-26 17:03:39.180 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:03:40 compute-0 nova_compute[185389]: 2026-01-26 17:03:40.233 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:03:41 compute-0 podman[248558]: 2026-01-26 17:03:41.209023343 +0000 UTC m=+0.085837595 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1214.1726694543, release-0.7.12=, name=ubi9, managed_by=edpm_ansible, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., config_id=kepler, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, io.openshift.expose-services=, build-date=2024-09-18T21:23:30)
Jan 26 17:03:41 compute-0 podman[248557]: 2026-01-26 17:03:41.233593412 +0000 UTC m=+0.102206381 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 26 17:03:41 compute-0 podman[248556]: 2026-01-26 17:03:41.238102695 +0000 UTC m=+0.120891710 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, 
org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 17:03:44 compute-0 nova_compute[185389]: 2026-01-26 17:03:44.184 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:03:45 compute-0 nova_compute[185389]: 2026-01-26 17:03:45.235 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:03:49 compute-0 nova_compute[185389]: 2026-01-26 17:03:49.189 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:03:50 compute-0 nova_compute[185389]: 2026-01-26 17:03:50.238 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:03:52 compute-0 nova_compute[185389]: 2026-01-26 17:03:52.738 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:03:52 compute-0 nova_compute[185389]: 2026-01-26 17:03:52.739 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 17:03:54 compute-0 nova_compute[185389]: 2026-01-26 17:03:54.193 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:03:55 compute-0 nova_compute[185389]: 2026-01-26 17:03:55.241 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:03:56 compute-0 nova_compute[185389]: 2026-01-26 17:03:56.721 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:03:57 compute-0 nova_compute[185389]: 2026-01-26 17:03:57.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:03:58 compute-0 nova_compute[185389]: 2026-01-26 17:03:58.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:03:58 compute-0 nova_compute[185389]: 2026-01-26 17:03:58.720 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 17:03:58 compute-0 nova_compute[185389]: 2026-01-26 17:03:58.943 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "refresh_cache-a2578f61-3f19-40f4-a32f-97cf22569550" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 17:03:58 compute-0 nova_compute[185389]: 2026-01-26 17:03:58.944 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquired lock "refresh_cache-a2578f61-3f19-40f4-a32f-97cf22569550" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 17:03:58 compute-0 nova_compute[185389]: 2026-01-26 17:03:58.946 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 26 17:03:59 compute-0 nova_compute[185389]: 2026-01-26 17:03:59.198 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:03:59 compute-0 podman[201244]: time="2026-01-26T17:03:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 17:03:59 compute-0 podman[201244]: @ - - [26/Jan/2026:17:03:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 17:03:59 compute-0 podman[201244]: @ - - [26/Jan/2026:17:03:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4387 "" "Go-http-client/1.1"
Jan 26 17:04:00 compute-0 nova_compute[185389]: 2026-01-26 17:04:00.016 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Updating instance_info_cache with network_info: [{"id": "58a644b5-e3a2-4838-9216-8540447cf0a5", "address": "fa:16:3e:ac:8d:a5", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.107", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58a644b5-e3", "ovs_interfaceid": "58a644b5-e3a2-4838-9216-8540447cf0a5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 17:04:00 compute-0 nova_compute[185389]: 2026-01-26 17:04:00.039 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Releasing lock "refresh_cache-a2578f61-3f19-40f4-a32f-97cf22569550" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 17:04:00 compute-0 nova_compute[185389]: 2026-01-26 17:04:00.039 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 26 17:04:00 compute-0 nova_compute[185389]: 2026-01-26 17:04:00.039 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:04:00 compute-0 nova_compute[185389]: 2026-01-26 17:04:00.244 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:04:01 compute-0 openstack_network_exporter[204387]: ERROR   17:04:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 17:04:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:04:01 compute-0 openstack_network_exporter[204387]: ERROR   17:04:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 17:04:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:04:01 compute-0 anacron[31011]: Job `cron.monthly' started
Jan 26 17:04:01 compute-0 anacron[31011]: Job `cron.monthly' terminated
Jan 26 17:04:01 compute-0 anacron[31011]: Normal exit (3 jobs run)
Jan 26 17:04:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:04:01.751 106955 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:04:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:04:01.752 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:04:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:04:01.754 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:04:03 compute-0 nova_compute[185389]: 2026-01-26 17:04:03.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:04:04 compute-0 nova_compute[185389]: 2026-01-26 17:04:04.203 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:04:04 compute-0 podman[248622]: 2026-01-26 17:04:04.214835753 +0000 UTC m=+0.092459427 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 26 17:04:04 compute-0 podman[248620]: 2026-01-26 17:04:04.256326122 +0000 UTC m=+0.130850612 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1755695350, vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, managed_by=edpm_ansible, name=ubi9-minimal, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, version=9.6, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=openstack_network_exporter, distribution-scope=public, architecture=x86_64)
Jan 26 17:04:04 compute-0 podman[248621]: 2026-01-26 17:04:04.257871353 +0000 UTC m=+0.128259671 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260120, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Jan 26 17:04:05 compute-0 nova_compute[185389]: 2026-01-26 17:04:05.247 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:04:05 compute-0 nova_compute[185389]: 2026-01-26 17:04:05.717 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:04:05 compute-0 nova_compute[185389]: 2026-01-26 17:04:05.718 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:04:05 compute-0 nova_compute[185389]: 2026-01-26 17:04:05.746 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:04:05 compute-0 nova_compute[185389]: 2026-01-26 17:04:05.747 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:04:05 compute-0 nova_compute[185389]: 2026-01-26 17:04:05.747 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:04:05 compute-0 nova_compute[185389]: 2026-01-26 17:04:05.747 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 17:04:05 compute-0 nova_compute[185389]: 2026-01-26 17:04:05.841 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:04:05 compute-0 nova_compute[185389]: 2026-01-26 17:04:05.918 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:04:05 compute-0 nova_compute[185389]: 2026-01-26 17:04:05.921 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:04:06 compute-0 nova_compute[185389]: 2026-01-26 17:04:06.022 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:04:06 compute-0 nova_compute[185389]: 2026-01-26 17:04:06.023 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:04:06 compute-0 nova_compute[185389]: 2026-01-26 17:04:06.087 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:04:06 compute-0 nova_compute[185389]: 2026-01-26 17:04:06.089 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:04:06 compute-0 nova_compute[185389]: 2026-01-26 17:04:06.150 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:04:06 compute-0 nova_compute[185389]: 2026-01-26 17:04:06.159 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:04:06 compute-0 nova_compute[185389]: 2026-01-26 17:04:06.227 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:04:06 compute-0 nova_compute[185389]: 2026-01-26 17:04:06.228 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:04:06 compute-0 nova_compute[185389]: 2026-01-26 17:04:06.288 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:04:06 compute-0 nova_compute[185389]: 2026-01-26 17:04:06.289 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:04:06 compute-0 nova_compute[185389]: 2026-01-26 17:04:06.356 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:04:06 compute-0 nova_compute[185389]: 2026-01-26 17:04:06.357 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:04:06 compute-0 nova_compute[185389]: 2026-01-26 17:04:06.418 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:04:06 compute-0 nova_compute[185389]: 2026-01-26 17:04:06.828 185393 WARNING nova.virt.libvirt.driver [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:04:06 compute-0 nova_compute[185389]: 2026-01-26 17:04:06.830 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4850MB free_disk=72.39701080322266GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 17:04:06 compute-0 nova_compute[185389]: 2026-01-26 17:04:06.830 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:04:06 compute-0 nova_compute[185389]: 2026-01-26 17:04:06.830 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:04:06 compute-0 nova_compute[185389]: 2026-01-26 17:04:06.921 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance 60ba224f-9c5d-4eb4-b501-66d7339832b9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 17:04:06 compute-0 nova_compute[185389]: 2026-01-26 17:04:06.921 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance a2578f61-3f19-40f4-a32f-97cf22569550 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 17:04:06 compute-0 nova_compute[185389]: 2026-01-26 17:04:06.922 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 17:04:06 compute-0 nova_compute[185389]: 2026-01-26 17:04:06.922 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 17:04:06 compute-0 nova_compute[185389]: 2026-01-26 17:04:06.988 185393 DEBUG nova.compute.provider_tree [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:04:07 compute-0 nova_compute[185389]: 2026-01-26 17:04:07.002 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:04:07 compute-0 nova_compute[185389]: 2026-01-26 17:04:07.004 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 17:04:07 compute-0 nova_compute[185389]: 2026-01-26 17:04:07.004 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.174s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:04:07 compute-0 podman[248704]: 2026-01-26 17:04:07.199517813 +0000 UTC m=+0.071908577 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 26 17:04:09 compute-0 nova_compute[185389]: 2026-01-26 17:04:09.006 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:04:09 compute-0 nova_compute[185389]: 2026-01-26 17:04:09.032 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:04:09 compute-0 nova_compute[185389]: 2026-01-26 17:04:09.206 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:04:09 compute-0 podman[248729]: 2026-01-26 17:04:09.213025169 +0000 UTC m=+0.098843060 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Jan 26 17:04:10 compute-0 nova_compute[185389]: 2026-01-26 17:04:10.250 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:04:12 compute-0 podman[248748]: 2026-01-26 17:04:12.19071524 +0000 UTC m=+0.078287951 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Jan 26 17:04:12 compute-0 podman[248749]: 2026-01-26 17:04:12.240037932 +0000 UTC m=+0.119164644 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., config_id=kepler, architecture=x86_64, com.redhat.component=ubi9-container, name=ubi9, release=1214.1726694543, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, distribution-scope=public, managed_by=edpm_ansible, version=9.4)
Jan 26 17:04:12 compute-0 podman[248747]: 2026-01-26 17:04:12.263291855 +0000 UTC m=+0.143699041 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 26 17:04:14 compute-0 nova_compute[185389]: 2026-01-26 17:04:14.209 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:04:15 compute-0 nova_compute[185389]: 2026-01-26 17:04:15.253 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:04:19 compute-0 nova_compute[185389]: 2026-01-26 17:04:19.214 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:04:20 compute-0 nova_compute[185389]: 2026-01-26 17:04:20.256 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:04:24 compute-0 nova_compute[185389]: 2026-01-26 17:04:24.219 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:04:25 compute-0 nova_compute[185389]: 2026-01-26 17:04:25.258 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:04:29 compute-0 nova_compute[185389]: 2026-01-26 17:04:29.223 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:04:29 compute-0 podman[201244]: time="2026-01-26T17:04:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 17:04:29 compute-0 podman[201244]: @ - - [26/Jan/2026:17:04:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 17:04:29 compute-0 podman[201244]: @ - - [26/Jan/2026:17:04:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4384 "" "Go-http-client/1.1"
Jan 26 17:04:30 compute-0 nova_compute[185389]: 2026-01-26 17:04:30.261 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.348 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.350 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.350 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.352 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f04ce8af020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.353 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.353 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.353 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.353 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.353 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.353 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.354 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.354 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.354 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.354 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.354 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.354 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.354 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8adb20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.354 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.354 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.354 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.354 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.354 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.355 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.355 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.355 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.355 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.355 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.356 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.356 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.359 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '60ba224f-9c5d-4eb4-b501-66d7339832b9', 'name': 'test_0', 'flavor': {'id': 'c2a8df4d-a1d7-42a3-8279-8c7de8a1a662', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '718285d9-0264-40f4-9fb3-d2faff180284'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'aa8f1f3bbce34237a208c8e92ca9286f', 'user_id': '3c0ab9326d69400aa6a4a91432885d7f', 'hostId': '5b2ad2004cb5a5985538ff82d4dda707a9aa9c0c35c745039e18e89b', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.362 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'a2578f61-3f19-40f4-a32f-97cf22569550', 'name': 'vn-vo2qfhx-3o3bmw4mfsy3-eo23jljqgyv6-vnf-pi7veetjym6y', 'flavor': {'id': 'c2a8df4d-a1d7-42a3-8279-8c7de8a1a662', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '718285d9-0264-40f4-9fb3-d2faff180284'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'aa8f1f3bbce34237a208c8e92ca9286f', 'user_id': '3c0ab9326d69400aa6a4a91432885d7f', 'hostId': '5b2ad2004cb5a5985538ff82d4dda707a9aa9c0c35c745039e18e89b', 'status': 'active', 'metadata': {'metering.server_group': '06b33269-d1c6-4fb9-a44b-be304982a550'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.363 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.363 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.363 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.364 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.365 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-26T17:04:31.364047) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:04:31 compute-0 openstack_network_exporter[204387]: ERROR   17:04:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 17:04:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:04:31 compute-0 openstack_network_exporter[204387]: ERROR   17:04:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 17:04:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.457 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.457 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.458 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.520 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.521 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.521 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.522 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.522 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f04ce8af080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.522 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.522 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.522 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.522 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.523 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.latency volume: 876270399 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.523 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-26T17:04:31.522781) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.523 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.latency volume: 10769042 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.523 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.523 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.latency volume: 1221465504 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.524 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.latency volume: 9811607 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.524 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.524 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.524 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f04ce8af0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.524 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.525 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.525 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.525 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.525 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-26T17:04:31.525236) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.525 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.525 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.526 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.526 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.526 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.526 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.527 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.527 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f04ce8af470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.527 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.527 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.527 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.527 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.528 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-26T17:04:31.527580) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.531 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.534 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.534 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.534 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f04ce8ac260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.534 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.534 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f04cfa81c70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.534 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.535 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.535 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.535 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.535 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-26T17:04:31.535187) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.555 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/cpu volume: 53360000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.574 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/cpu volume: 47740000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.575 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.575 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f04ce8ac170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.575 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.575 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.575 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.575 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.575 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.576 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-26T17:04:31.575785) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.576 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.576 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.576 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f04ce8af140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.576 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.577 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.577 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.577 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.577 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-26T17:04:31.577282) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.577 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.578 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f04ce8adb50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.578 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.578 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.578 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.578 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.578 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.578 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.579 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.579 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f04ce8ad9a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.579 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.579 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.579 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.579 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.580 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-26T17:04:31.578417) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.580 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.bytes volume: 2412 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.580 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-26T17:04:31.579855) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.580 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.outgoing.bytes volume: 2538 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.580 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.580 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f04ce8af1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.580 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.581 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.581 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.581 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.581 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.581 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-26T17:04:31.581210) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.581 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f04ce8ad8e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.582 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.582 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.582 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.582 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.582 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-26T17:04:31.582330) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.582 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.582 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.outgoing.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.583 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.583 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f04ce8ada60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.583 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.583 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.583 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.583 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.583 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.584 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-26T17:04:31.583598) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.584 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.584 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.584 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f04ce8adaf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.584 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.584 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f04ce8ad880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.584 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.584 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.584 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.585 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.585 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.585 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.585 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.586 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f04ce8af3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.586 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.586 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.586 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.586 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.586 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/memory.usage volume: 48.79296875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.586 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-26T17:04:31.585048) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.587 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-26T17:04:31.586542) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.587 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/memory.usage volume: 48.953125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.587 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.587 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f04ce8af6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.587 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.587 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.587 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.587 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.588 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.588 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-26T17:04:31.587814) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.588 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.588 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.588 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f04ce8af410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.589 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.589 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.589 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.589 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.589 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.bytes volume: 2304 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.589 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.incoming.bytes volume: 1696 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.590 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.590 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f04ce8ac470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.590 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.590 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.590 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.590 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.590 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-26T17:04:31.589199) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.590 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.591 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-26T17:04:31.590599) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.591 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.591 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.591 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f04d17142f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.591 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.591 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.591 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.591 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.592 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-26T17:04:31.591853) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.610 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.610 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.611 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.636 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.636 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.636 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.637 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.637 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f04ce8afe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.637 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.637 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.637 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.637 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.638 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.638 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-26T17:04:31.637690) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.638 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.638 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.638 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.638 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.639 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.639 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.639 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f04ce8aede0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.639 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.639 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.640 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.640 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.640 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-26T17:04:31.640157) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.640 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.640 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.640 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.641 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.641 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.641 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.641 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.642 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f04ce8aef00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.642 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.642 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.642 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.642 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.642 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.latency volume: 331117122 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.642 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-26T17:04:31.642359) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.642 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.latency volume: 97972603 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.643 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.latency volume: 58692222 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.643 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.latency volume: 437272566 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.643 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.latency volume: 86953754 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.643 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.latency volume: 62824695 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.644 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.644 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f04ce8ac710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.644 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.644 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.644 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.644 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.644 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-26T17:04:31.644623) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.644 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.645 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.645 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.645 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f04ce8aef60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.645 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.645 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.645 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.646 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.646 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.646 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.646 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.646 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-26T17:04:31.646073) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.647 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.647 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.647 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.647 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.648 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f04ce8aefc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.648 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.648 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.648 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.648 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.648 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.648 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.648 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.649 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.649 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.649 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-26T17:04:31.648451) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.649 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.650 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.650 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.650 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.650 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.650 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.651 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.651 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.651 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.651 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.651 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.651 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.651 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.651 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.651 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.651 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.651 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.651 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.651 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.651 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.652 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.652 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.652 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.652 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.652 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.652 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.652 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:04:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:04:31.652 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:04:34 compute-0 nova_compute[185389]: 2026-01-26 17:04:34.227 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:04:35 compute-0 podman[248817]: 2026-01-26 17:04:35.18705847 +0000 UTC m=+0.064834905 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 26 17:04:35 compute-0 podman[248815]: 2026-01-26 17:04:35.217647112 +0000 UTC m=+0.104609518 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, name=ubi9-minimal, build-date=2025-08-20T13:12:41, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Jan 26 17:04:35 compute-0 podman[248816]: 2026-01-26 17:04:35.234600783 +0000 UTC m=+0.112286496 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.build-date=20260120, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Jan 26 17:04:35 compute-0 nova_compute[185389]: 2026-01-26 17:04:35.263 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:04:38 compute-0 podman[248874]: 2026-01-26 17:04:38.1923366 +0000 UTC m=+0.082318801 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 26 17:04:39 compute-0 nova_compute[185389]: 2026-01-26 17:04:39.229 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:04:40 compute-0 podman[248898]: 2026-01-26 17:04:40.253375578 +0000 UTC m=+0.130174142 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 26 17:04:40 compute-0 nova_compute[185389]: 2026-01-26 17:04:40.267 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:04:42 compute-0 sshd-session[248916]: Connection closed by 80.94.92.171 port 52978
Jan 26 17:04:43 compute-0 podman[248919]: 2026-01-26 17:04:43.204777174 +0000 UTC m=+0.080714717 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, config_id=kepler, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.29.0, architecture=x86_64, container_name=kepler, io.openshift.tags=base rhel9, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, version=9.4)
Jan 26 17:04:43 compute-0 podman[248918]: 2026-01-26 17:04:43.2226489 +0000 UTC m=+0.099084357 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 17:04:43 compute-0 podman[248917]: 2026-01-26 17:04:43.246739596 +0000 UTC m=+0.133588036 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, 
tcib_managed=true, org.label-schema.vendor=CentOS)
Jan 26 17:04:44 compute-0 nova_compute[185389]: 2026-01-26 17:04:44.233 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:04:45 compute-0 nova_compute[185389]: 2026-01-26 17:04:45.272 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:04:49 compute-0 nova_compute[185389]: 2026-01-26 17:04:49.238 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:04:50 compute-0 nova_compute[185389]: 2026-01-26 17:04:50.276 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:04:53 compute-0 nova_compute[185389]: 2026-01-26 17:04:53.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:04:53 compute-0 nova_compute[185389]: 2026-01-26 17:04:53.721 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 17:04:54 compute-0 nova_compute[185389]: 2026-01-26 17:04:54.242 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:04:55 compute-0 nova_compute[185389]: 2026-01-26 17:04:55.278 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:04:58 compute-0 nova_compute[185389]: 2026-01-26 17:04:58.722 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:04:58 compute-0 nova_compute[185389]: 2026-01-26 17:04:58.723 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 17:04:58 compute-0 nova_compute[185389]: 2026-01-26 17:04:58.723 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 17:04:59 compute-0 nova_compute[185389]: 2026-01-26 17:04:59.247 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:04:59 compute-0 podman[201244]: time="2026-01-26T17:04:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 17:04:59 compute-0 podman[201244]: @ - - [26/Jan/2026:17:04:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 17:04:59 compute-0 podman[201244]: @ - - [26/Jan/2026:17:04:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4383 "" "Go-http-client/1.1"
Jan 26 17:04:59 compute-0 nova_compute[185389]: 2026-01-26 17:04:59.935 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "refresh_cache-60ba224f-9c5d-4eb4-b501-66d7339832b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 17:04:59 compute-0 nova_compute[185389]: 2026-01-26 17:04:59.936 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquired lock "refresh_cache-60ba224f-9c5d-4eb4-b501-66d7339832b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 17:04:59 compute-0 nova_compute[185389]: 2026-01-26 17:04:59.937 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 26 17:04:59 compute-0 nova_compute[185389]: 2026-01-26 17:04:59.938 185393 DEBUG nova.objects.instance [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 60ba224f-9c5d-4eb4-b501-66d7339832b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 17:05:00 compute-0 nova_compute[185389]: 2026-01-26 17:05:00.283 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:05:01 compute-0 openstack_network_exporter[204387]: ERROR   17:05:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 17:05:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:05:01 compute-0 openstack_network_exporter[204387]: ERROR   17:05:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 17:05:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:05:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:05:01.752 106955 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:05:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:05:01.754 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:05:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:05:01.754 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:05:03 compute-0 nova_compute[185389]: 2026-01-26 17:05:03.165 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Updating instance_info_cache with network_info: [{"id": "0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f", "address": "fa:16:3e:b0:51:31", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.57", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f88f3ae-fb", "ovs_interfaceid": "0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 17:05:03 compute-0 nova_compute[185389]: 2026-01-26 17:05:03.191 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Releasing lock "refresh_cache-60ba224f-9c5d-4eb4-b501-66d7339832b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 17:05:03 compute-0 nova_compute[185389]: 2026-01-26 17:05:03.192 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 26 17:05:03 compute-0 nova_compute[185389]: 2026-01-26 17:05:03.193 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:05:03 compute-0 nova_compute[185389]: 2026-01-26 17:05:03.194 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:05:03 compute-0 nova_compute[185389]: 2026-01-26 17:05:03.195 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:05:03 compute-0 nova_compute[185389]: 2026-01-26 17:05:03.721 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:05:04 compute-0 nova_compute[185389]: 2026-01-26 17:05:04.251 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:05:05 compute-0 nova_compute[185389]: 2026-01-26 17:05:05.286 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:05:06 compute-0 podman[248983]: 2026-01-26 17:05:06.207774476 +0000 UTC m=+0.091923682 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.openshift.expose-services=, version=9.6, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 26 17:05:06 compute-0 podman[248985]: 2026-01-26 17:05:06.215573589 +0000 UTC m=+0.081517289 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 26 17:05:06 compute-0 podman[248984]: 2026-01-26 17:05:06.216859803 +0000 UTC m=+0.091625823 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, config_id=ceilometer_agent_compute, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260120, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Jan 26 17:05:06 compute-0 nova_compute[185389]: 2026-01-26 17:05:06.714 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:05:06 compute-0 nova_compute[185389]: 2026-01-26 17:05:06.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:05:06 compute-0 nova_compute[185389]: 2026-01-26 17:05:06.757 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:05:06 compute-0 nova_compute[185389]: 2026-01-26 17:05:06.758 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:05:06 compute-0 nova_compute[185389]: 2026-01-26 17:05:06.758 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:05:06 compute-0 nova_compute[185389]: 2026-01-26 17:05:06.759 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 17:05:06 compute-0 nova_compute[185389]: 2026-01-26 17:05:06.847 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:05:06 compute-0 nova_compute[185389]: 2026-01-26 17:05:06.914 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:05:06 compute-0 nova_compute[185389]: 2026-01-26 17:05:06.915 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:05:06 compute-0 nova_compute[185389]: 2026-01-26 17:05:06.978 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:05:06 compute-0 nova_compute[185389]: 2026-01-26 17:05:06.981 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:05:07 compute-0 nova_compute[185389]: 2026-01-26 17:05:07.059 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:05:07 compute-0 nova_compute[185389]: 2026-01-26 17:05:07.060 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:05:07 compute-0 nova_compute[185389]: 2026-01-26 17:05:07.124 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:05:07 compute-0 nova_compute[185389]: 2026-01-26 17:05:07.139 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:05:07 compute-0 nova_compute[185389]: 2026-01-26 17:05:07.239 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:05:07 compute-0 nova_compute[185389]: 2026-01-26 17:05:07.241 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:05:07 compute-0 nova_compute[185389]: 2026-01-26 17:05:07.308 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:05:07 compute-0 nova_compute[185389]: 2026-01-26 17:05:07.309 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:05:07 compute-0 nova_compute[185389]: 2026-01-26 17:05:07.375 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:05:07 compute-0 nova_compute[185389]: 2026-01-26 17:05:07.376 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:05:07 compute-0 nova_compute[185389]: 2026-01-26 17:05:07.445 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:05:07 compute-0 nova_compute[185389]: 2026-01-26 17:05:07.865 185393 WARNING nova.virt.libvirt.driver [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:05:07 compute-0 nova_compute[185389]: 2026-01-26 17:05:07.866 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4847MB free_disk=72.39706802368164GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 17:05:07 compute-0 nova_compute[185389]: 2026-01-26 17:05:07.867 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:05:07 compute-0 nova_compute[185389]: 2026-01-26 17:05:07.867 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:05:07 compute-0 nova_compute[185389]: 2026-01-26 17:05:07.948 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance 60ba224f-9c5d-4eb4-b501-66d7339832b9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 17:05:07 compute-0 nova_compute[185389]: 2026-01-26 17:05:07.949 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance a2578f61-3f19-40f4-a32f-97cf22569550 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 17:05:07 compute-0 nova_compute[185389]: 2026-01-26 17:05:07.949 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 17:05:07 compute-0 nova_compute[185389]: 2026-01-26 17:05:07.950 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 17:05:08 compute-0 nova_compute[185389]: 2026-01-26 17:05:08.012 185393 DEBUG nova.compute.provider_tree [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:05:08 compute-0 nova_compute[185389]: 2026-01-26 17:05:08.031 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:05:08 compute-0 nova_compute[185389]: 2026-01-26 17:05:08.033 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 17:05:08 compute-0 nova_compute[185389]: 2026-01-26 17:05:08.033 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.166s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:05:09 compute-0 podman[249069]: 2026-01-26 17:05:09.189198668 +0000 UTC m=+0.067401245 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 26 17:05:09 compute-0 nova_compute[185389]: 2026-01-26 17:05:09.256 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:05:10 compute-0 nova_compute[185389]: 2026-01-26 17:05:10.288 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:05:11 compute-0 nova_compute[185389]: 2026-01-26 17:05:11.035 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:05:11 compute-0 podman[249092]: 2026-01-26 17:05:11.184755357 +0000 UTC m=+0.071707723 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 26 17:05:14 compute-0 podman[249113]: 2026-01-26 17:05:14.206009883 +0000 UTC m=+0.079996057 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, distribution-scope=public, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release-0.7.12=, io.openshift.expose-services=, version=9.4, io.openshift.tags=base rhel9, release=1214.1726694543, maintainer=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, config_id=kepler, vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git, build-date=2024-09-18T21:23:30)
Jan 26 17:05:14 compute-0 podman[249111]: 2026-01-26 17:05:14.238485567 +0000 UTC m=+0.116059179 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:05:14 compute-0 podman[249112]: 2026-01-26 17:05:14.258583554 +0000 UTC m=+0.130035319 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 17:05:14 compute-0 nova_compute[185389]: 2026-01-26 17:05:14.258 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:05:15 compute-0 nova_compute[185389]: 2026-01-26 17:05:15.290 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:05:19 compute-0 nova_compute[185389]: 2026-01-26 17:05:19.262 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:05:20 compute-0 nova_compute[185389]: 2026-01-26 17:05:20.293 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:05:24 compute-0 nova_compute[185389]: 2026-01-26 17:05:24.267 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:05:25 compute-0 nova_compute[185389]: 2026-01-26 17:05:25.295 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:05:29 compute-0 nova_compute[185389]: 2026-01-26 17:05:29.271 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:05:29 compute-0 podman[201244]: time="2026-01-26T17:05:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 17:05:29 compute-0 podman[201244]: @ - - [26/Jan/2026:17:05:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 17:05:29 compute-0 podman[201244]: @ - - [26/Jan/2026:17:05:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4386 "" "Go-http-client/1.1"
Jan 26 17:05:30 compute-0 nova_compute[185389]: 2026-01-26 17:05:30.298 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:05:31 compute-0 openstack_network_exporter[204387]: ERROR   17:05:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 17:05:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:05:31 compute-0 openstack_network_exporter[204387]: ERROR   17:05:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 17:05:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:05:34 compute-0 nova_compute[185389]: 2026-01-26 17:05:34.274 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:05:35 compute-0 nova_compute[185389]: 2026-01-26 17:05:35.300 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:05:37 compute-0 podman[249178]: 2026-01-26 17:05:37.193683989 +0000 UTC m=+0.067955390 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 26 17:05:37 compute-0 podman[249177]: 2026-01-26 17:05:37.19885956 +0000 UTC m=+0.076111112 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260120, org.label-schema.schema-version=1.0)
Jan 26 17:05:37 compute-0 podman[249176]: 2026-01-26 17:05:37.198810688 +0000 UTC m=+0.079943006 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, config_id=openstack_network_exporter, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, container_name=openstack_network_exporter, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, release=1755695350, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Jan 26 17:05:39 compute-0 nova_compute[185389]: 2026-01-26 17:05:39.279 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:05:40 compute-0 podman[249237]: 2026-01-26 17:05:40.204873205 +0000 UTC m=+0.079425352 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 26 17:05:40 compute-0 nova_compute[185389]: 2026-01-26 17:05:40.303 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:05:42 compute-0 podman[249261]: 2026-01-26 17:05:42.197509773 +0000 UTC m=+0.077441437 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 17:05:44 compute-0 nova_compute[185389]: 2026-01-26 17:05:44.283 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:05:44 compute-0 podman[249281]: 2026-01-26 17:05:44.779723835 +0000 UTC m=+0.085469966 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, 
org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 26 17:05:44 compute-0 podman[249282]: 2026-01-26 17:05:44.784465664 +0000 UTC m=+0.085775885 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., release=1214.1726694543, vcs-type=git, version=9.4, io.openshift.expose-services=, distribution-scope=public, name=ubi9, config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, release-0.7.12=, vendor=Red Hat, Inc., architecture=x86_64, container_name=kepler, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9)
Jan 26 17:05:44 compute-0 podman[249280]: 2026-01-26 17:05:44.849901574 +0000 UTC m=+0.159329926 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 26 17:05:45 compute-0 nova_compute[185389]: 2026-01-26 17:05:45.305 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:05:49 compute-0 nova_compute[185389]: 2026-01-26 17:05:49.287 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:05:50 compute-0 nova_compute[185389]: 2026-01-26 17:05:50.307 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:05:54 compute-0 nova_compute[185389]: 2026-01-26 17:05:54.293 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:05:54 compute-0 nova_compute[185389]: 2026-01-26 17:05:54.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:05:54 compute-0 nova_compute[185389]: 2026-01-26 17:05:54.720 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 17:05:55 compute-0 nova_compute[185389]: 2026-01-26 17:05:55.308 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:05:58 compute-0 nova_compute[185389]: 2026-01-26 17:05:58.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:05:58 compute-0 nova_compute[185389]: 2026-01-26 17:05:58.721 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 17:05:59 compute-0 nova_compute[185389]: 2026-01-26 17:05:59.222 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "refresh_cache-a2578f61-3f19-40f4-a32f-97cf22569550" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 17:05:59 compute-0 nova_compute[185389]: 2026-01-26 17:05:59.223 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquired lock "refresh_cache-a2578f61-3f19-40f4-a32f-97cf22569550" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 17:05:59 compute-0 nova_compute[185389]: 2026-01-26 17:05:59.223 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 26 17:05:59 compute-0 nova_compute[185389]: 2026-01-26 17:05:59.297 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:05:59 compute-0 podman[201244]: time="2026-01-26T17:05:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 17:05:59 compute-0 podman[201244]: @ - - [26/Jan/2026:17:05:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 17:05:59 compute-0 podman[201244]: @ - - [26/Jan/2026:17:05:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4382 "" "Go-http-client/1.1"
Jan 26 17:06:00 compute-0 nova_compute[185389]: 2026-01-26 17:06:00.310 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:06:00 compute-0 nova_compute[185389]: 2026-01-26 17:06:00.592 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Updating instance_info_cache with network_info: [{"id": "58a644b5-e3a2-4838-9216-8540447cf0a5", "address": "fa:16:3e:ac:8d:a5", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.107", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58a644b5-e3", "ovs_interfaceid": "58a644b5-e3a2-4838-9216-8540447cf0a5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 17:06:00 compute-0 nova_compute[185389]: 2026-01-26 17:06:00.620 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Releasing lock "refresh_cache-a2578f61-3f19-40f4-a32f-97cf22569550" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 17:06:00 compute-0 nova_compute[185389]: 2026-01-26 17:06:00.620 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 26 17:06:00 compute-0 nova_compute[185389]: 2026-01-26 17:06:00.621 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:06:00 compute-0 nova_compute[185389]: 2026-01-26 17:06:00.621 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:06:00 compute-0 nova_compute[185389]: 2026-01-26 17:06:00.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:06:01 compute-0 openstack_network_exporter[204387]: ERROR   17:06:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 17:06:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:06:01 compute-0 openstack_network_exporter[204387]: ERROR   17:06:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 17:06:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:06:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:06:01.754 106955 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:06:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:06:01.754 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:06:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:06:01.755 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:06:04 compute-0 nova_compute[185389]: 2026-01-26 17:06:04.301 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:06:04 compute-0 nova_compute[185389]: 2026-01-26 17:06:04.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:06:05 compute-0 nova_compute[185389]: 2026-01-26 17:06:05.312 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:06:06 compute-0 nova_compute[185389]: 2026-01-26 17:06:06.714 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:06:06 compute-0 nova_compute[185389]: 2026-01-26 17:06:06.718 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:06:06 compute-0 nova_compute[185389]: 2026-01-26 17:06:06.744 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:06:06 compute-0 nova_compute[185389]: 2026-01-26 17:06:06.745 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:06:06 compute-0 nova_compute[185389]: 2026-01-26 17:06:06.745 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:06:06 compute-0 nova_compute[185389]: 2026-01-26 17:06:06.746 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 17:06:06 compute-0 nova_compute[185389]: 2026-01-26 17:06:06.841 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:06:06 compute-0 nova_compute[185389]: 2026-01-26 17:06:06.905 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:06:06 compute-0 nova_compute[185389]: 2026-01-26 17:06:06.906 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:06:06 compute-0 nova_compute[185389]: 2026-01-26 17:06:06.966 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:06:06 compute-0 nova_compute[185389]: 2026-01-26 17:06:06.967 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:06:07 compute-0 nova_compute[185389]: 2026-01-26 17:06:07.032 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:06:07 compute-0 nova_compute[185389]: 2026-01-26 17:06:07.033 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:06:07 compute-0 nova_compute[185389]: 2026-01-26 17:06:07.108 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:06:07 compute-0 nova_compute[185389]: 2026-01-26 17:06:07.116 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:06:07 compute-0 nova_compute[185389]: 2026-01-26 17:06:07.181 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:06:07 compute-0 nova_compute[185389]: 2026-01-26 17:06:07.182 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:06:07 compute-0 nova_compute[185389]: 2026-01-26 17:06:07.253 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:06:07 compute-0 nova_compute[185389]: 2026-01-26 17:06:07.255 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:06:07 compute-0 nova_compute[185389]: 2026-01-26 17:06:07.337 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:06:07 compute-0 nova_compute[185389]: 2026-01-26 17:06:07.339 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:06:07 compute-0 nova_compute[185389]: 2026-01-26 17:06:07.413 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:06:07 compute-0 nova_compute[185389]: 2026-01-26 17:06:07.791 185393 WARNING nova.virt.libvirt.driver [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:06:07 compute-0 nova_compute[185389]: 2026-01-26 17:06:07.793 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4836MB free_disk=72.3968276977539GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 17:06:07 compute-0 nova_compute[185389]: 2026-01-26 17:06:07.793 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:06:07 compute-0 nova_compute[185389]: 2026-01-26 17:06:07.794 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:06:07 compute-0 nova_compute[185389]: 2026-01-26 17:06:07.895 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance 60ba224f-9c5d-4eb4-b501-66d7339832b9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 17:06:07 compute-0 nova_compute[185389]: 2026-01-26 17:06:07.896 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance a2578f61-3f19-40f4-a32f-97cf22569550 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 17:06:07 compute-0 nova_compute[185389]: 2026-01-26 17:06:07.896 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 17:06:07 compute-0 nova_compute[185389]: 2026-01-26 17:06:07.896 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 17:06:07 compute-0 nova_compute[185389]: 2026-01-26 17:06:07.969 185393 DEBUG nova.compute.provider_tree [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:06:07 compute-0 nova_compute[185389]: 2026-01-26 17:06:07.989 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:06:07 compute-0 nova_compute[185389]: 2026-01-26 17:06:07.991 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 17:06:07 compute-0 nova_compute[185389]: 2026-01-26 17:06:07.991 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.198s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:06:08 compute-0 podman[249366]: 2026-01-26 17:06:08.23446092 +0000 UTC m=+0.097352151 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20260120, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 26 17:06:08 compute-0 podman[249365]: 2026-01-26 17:06:08.252277084 +0000 UTC m=+0.120660904 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, distribution-scope=public, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, architecture=x86_64, container_name=openstack_network_exporter, name=ubi9-minimal, io.openshift.expose-services=, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', 
'/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, version=9.6, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41)
Jan 26 17:06:08 compute-0 podman[249367]: 2026-01-26 17:06:08.262479042 +0000 UTC m=+0.078175038 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 26 17:06:09 compute-0 nova_compute[185389]: 2026-01-26 17:06:09.305 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:06:10 compute-0 nova_compute[185389]: 2026-01-26 17:06:10.313 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:06:11 compute-0 podman[249426]: 2026-01-26 17:06:11.249036687 +0000 UTC m=+0.130654916 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 26 17:06:12 compute-0 nova_compute[185389]: 2026-01-26 17:06:12.993 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:06:13 compute-0 podman[249449]: 2026-01-26 17:06:13.209013287 +0000 UTC m=+0.073851670 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 26 17:06:13 compute-0 nova_compute[185389]: 2026-01-26 17:06:13.716 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:06:14 compute-0 nova_compute[185389]: 2026-01-26 17:06:14.309 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:06:15 compute-0 podman[249471]: 2026-01-26 17:06:15.224259914 +0000 UTC m=+0.085210990 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=kepler, architecture=x86_64, build-date=2024-09-18T21:23:30, release=1214.1726694543, distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, container_name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, name=ubi9)
Jan 26 17:06:15 compute-0 podman[249470]: 2026-01-26 17:06:15.233093315 +0000 UTC m=+0.104889866 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi)
Jan 26 17:06:15 compute-0 podman[249469]: 2026-01-26 17:06:15.257316974 +0000 UTC m=+0.130957645 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 17:06:15 compute-0 nova_compute[185389]: 2026-01-26 17:06:15.316 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:06:19 compute-0 nova_compute[185389]: 2026-01-26 17:06:19.314 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:06:20 compute-0 nova_compute[185389]: 2026-01-26 17:06:20.318 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:06:24 compute-0 nova_compute[185389]: 2026-01-26 17:06:24.317 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:06:25 compute-0 nova_compute[185389]: 2026-01-26 17:06:25.320 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:06:29 compute-0 nova_compute[185389]: 2026-01-26 17:06:29.322 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:06:29 compute-0 podman[201244]: time="2026-01-26T17:06:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 17:06:29 compute-0 podman[201244]: @ - - [26/Jan/2026:17:06:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 17:06:29 compute-0 podman[201244]: @ - - [26/Jan/2026:17:06:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4382 "" "Go-http-client/1.1"
Jan 26 17:06:30 compute-0 nova_compute[185389]: 2026-01-26 17:06:30.322 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.348 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.349 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.349 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.350 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f04ce8af020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.352 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.352 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.353 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.353 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.353 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.353 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.354 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.354 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.354 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.354 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.354 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.354 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.355 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8adb20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.355 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.355 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.355 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.355 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.355 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.356 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.356 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.356 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.357 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.357 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.357 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.357 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.361 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '60ba224f-9c5d-4eb4-b501-66d7339832b9', 'name': 'test_0', 'flavor': {'id': 'c2a8df4d-a1d7-42a3-8279-8c7de8a1a662', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '718285d9-0264-40f4-9fb3-d2faff180284'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'aa8f1f3bbce34237a208c8e92ca9286f', 'user_id': '3c0ab9326d69400aa6a4a91432885d7f', 'hostId': '5b2ad2004cb5a5985538ff82d4dda707a9aa9c0c35c745039e18e89b', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.365 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'a2578f61-3f19-40f4-a32f-97cf22569550', 'name': 'vn-vo2qfhx-3o3bmw4mfsy3-eo23jljqgyv6-vnf-pi7veetjym6y', 'flavor': {'id': 'c2a8df4d-a1d7-42a3-8279-8c7de8a1a662', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '718285d9-0264-40f4-9fb3-d2faff180284'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'aa8f1f3bbce34237a208c8e92ca9286f', 'user_id': '3c0ab9326d69400aa6a4a91432885d7f', 'hostId': '5b2ad2004cb5a5985538ff82d4dda707a9aa9c0c35c745039e18e89b', 'status': 'active', 'metadata': {'metering.server_group': '06b33269-d1c6-4fb9-a44b-be304982a550'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.366 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.366 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.367 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.367 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.369 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-26T17:06:31.367413) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:06:31 compute-0 openstack_network_exporter[204387]: ERROR   17:06:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 17:06:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:06:31 compute-0 openstack_network_exporter[204387]: ERROR   17:06:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 17:06:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.452 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.453 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.453 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.552 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.553 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.554 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.555 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f04ce8af080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.555 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.555 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.556 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.556 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.556 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.latency volume: 876270399 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.557 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.latency volume: 10769042 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.557 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.558 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.latency volume: 1221465504 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.558 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.latency volume: 9811607 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.559 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.559 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-26T17:06:31.556311) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.560 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.560 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f04ce8af0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.560 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.561 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.561 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.561 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.561 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.562 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.562 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.563 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-26T17:06:31.561335) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.563 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.563 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.564 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.565 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.565 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f04ce8af470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.565 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.566 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.566 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.566 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.567 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-26T17:06:31.566468) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.575 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.582 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.584 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.584 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f04ce8ac260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.584 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.584 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f04cfa81c70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.585 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.585 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.585 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.586 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.587 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-26T17:06:31.586326) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.623 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/cpu volume: 54830000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.666 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/cpu volume: 49230000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.667 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.667 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f04ce8ac170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.667 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.667 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.667 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.668 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.668 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.668 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.668 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.668 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f04ce8af140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.669 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.669 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.669 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.669 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.669 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-26T17:06:31.668033) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.669 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.670 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f04ce8adb50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.670 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.670 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.670 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.670 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.670 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.670 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-26T17:06:31.669537) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.670 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.671 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.671 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f04ce8ad9a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.671 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-26T17:06:31.670453) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.671 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.671 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.671 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.671 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.672 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.bytes volume: 2412 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.672 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.outgoing.bytes volume: 2538 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.672 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.672 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f04ce8af1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.672 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.673 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.673 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.673 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.673 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.673 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f04ce8ad8e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.673 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.673 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.673 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.674 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.674 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.674 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.outgoing.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.674 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.674 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f04ce8ada60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.674 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.674 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.675 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.675 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.675 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.675 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-26T17:06:31.671977) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.675 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.675 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.676 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f04ce8adaf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.676 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.676 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f04ce8ad880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.676 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.676 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.676 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.676 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.676 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.676 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.677 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.677 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f04ce8af3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.677 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.677 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.677 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.677 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.677 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/memory.usage volume: 48.79296875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.677 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/memory.usage volume: 48.953125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.678 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.678 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f04ce8af6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.678 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-26T17:06:31.673186) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.678 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.678 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.678 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.678 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-26T17:06:31.674006) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.678 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.678 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-26T17:06:31.675306) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.679 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.679 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-26T17:06:31.676508) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.679 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.679 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-26T17:06:31.677556) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.679 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.679 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f04ce8af410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.679 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.679 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.680 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.680 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.680 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.bytes volume: 2304 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.680 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.incoming.bytes volume: 1696 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.680 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.680 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f04ce8ac470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.681 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.681 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.681 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.681 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.681 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.681 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.682 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.682 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f04d17142f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.682 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.682 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.682 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.682 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-26T17:06:31.678858) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.682 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.683 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-26T17:06:31.680097) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.683 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-26T17:06:31.681288) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.684 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-26T17:06:31.682869) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.724 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.725 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.725 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.757 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.757 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.758 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.759 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.759 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f04ce8afe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.759 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.760 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.760 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.760 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.760 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.761 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-26T17:06:31.760362) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.761 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.761 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.762 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.762 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.763 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.764 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.764 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f04ce8aede0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.764 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.764 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.764 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.765 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.765 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.765 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-26T17:06:31.764942) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.766 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.766 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.766 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.767 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.767 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.768 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.768 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f04ce8aef00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.769 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.769 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.769 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.769 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.769 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.latency volume: 331117122 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.770 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.latency volume: 97972603 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.770 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.latency volume: 58692222 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.771 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.latency volume: 437272566 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.771 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.latency volume: 86953754 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.772 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-26T17:06:31.769578) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.772 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.latency volume: 62824695 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.773 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.773 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f04ce8ac710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.773 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.774 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.774 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.774 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.775 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-26T17:06:31.774474) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.775 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.775 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.776 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.776 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f04ce8aef60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.776 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.776 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.776 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.777 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.777 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.777 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.778 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-26T17:06:31.776921) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.778 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.778 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.779 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.779 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.780 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.780 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f04ce8aefc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.781 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.781 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.781 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.781 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.782 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-26T17:06:31.781658) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.782 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.782 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.783 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.783 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.783 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.784 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.785 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.785 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.786 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.786 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.786 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.786 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.787 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.787 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.787 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.787 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.787 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.788 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.788 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.788 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.788 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.788 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.788 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.789 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.789 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.789 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.789 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.789 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.790 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.790 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.790 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.790 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:06:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:06:31.790 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:06:34 compute-0 nova_compute[185389]: 2026-01-26 17:06:34.325 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:06:35 compute-0 nova_compute[185389]: 2026-01-26 17:06:35.325 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:06:39 compute-0 podman[249534]: 2026-01-26 17:06:39.225232819 +0000 UTC m=+0.102344676 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.buildah.version=1.33.7, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, version=9.6, config_id=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, name=ubi9-minimal, release=1755695350, 
io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Jan 26 17:06:39 compute-0 podman[249536]: 2026-01-26 17:06:39.231419977 +0000 UTC m=+0.094106531 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 26 17:06:39 compute-0 podman[249535]: 2026-01-26 17:06:39.233843513 +0000 UTC m=+0.105646265 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260120, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute)
Jan 26 17:06:39 compute-0 nova_compute[185389]: 2026-01-26 17:06:39.330 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:06:40 compute-0 nova_compute[185389]: 2026-01-26 17:06:40.329 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:06:42 compute-0 podman[249597]: 2026-01-26 17:06:42.198487458 +0000 UTC m=+0.081741226 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 26 17:06:44 compute-0 podman[249620]: 2026-01-26 17:06:44.334366253 +0000 UTC m=+0.208178215 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Jan 26 17:06:44 compute-0 nova_compute[185389]: 2026-01-26 17:06:44.336 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:06:45 compute-0 nova_compute[185389]: 2026-01-26 17:06:45.330 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:06:46 compute-0 podman[249638]: 2026-01-26 17:06:46.230339052 +0000 UTC m=+0.097642638 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Jan 26 17:06:46 compute-0 podman[249639]: 2026-01-26 17:06:46.238254877 +0000 UTC m=+0.099496478 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, vcs-type=git, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, architecture=x86_64, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release-0.7.12=, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 
9, release=1214.1726694543, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9)
Jan 26 17:06:46 compute-0 podman[249637]: 2026-01-26 17:06:46.272747426 +0000 UTC m=+0.141355177 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, 
io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 26 17:06:49 compute-0 nova_compute[185389]: 2026-01-26 17:06:49.341 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:06:50 compute-0 nova_compute[185389]: 2026-01-26 17:06:50.332 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:06:54 compute-0 nova_compute[185389]: 2026-01-26 17:06:54.342 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:06:55 compute-0 nova_compute[185389]: 2026-01-26 17:06:55.335 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:06:55 compute-0 nova_compute[185389]: 2026-01-26 17:06:55.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:06:55 compute-0 nova_compute[185389]: 2026-01-26 17:06:55.720 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 17:06:59 compute-0 nova_compute[185389]: 2026-01-26 17:06:59.345 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:06:59 compute-0 podman[201244]: time="2026-01-26T17:06:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 17:06:59 compute-0 podman[201244]: @ - - [26/Jan/2026:17:06:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 17:06:59 compute-0 podman[201244]: @ - - [26/Jan/2026:17:06:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4388 "" "Go-http-client/1.1"
Jan 26 17:07:00 compute-0 nova_compute[185389]: 2026-01-26 17:07:00.337 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:07:00 compute-0 nova_compute[185389]: 2026-01-26 17:07:00.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:07:00 compute-0 nova_compute[185389]: 2026-01-26 17:07:00.721 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 17:07:00 compute-0 nova_compute[185389]: 2026-01-26 17:07:00.721 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 17:07:01 compute-0 openstack_network_exporter[204387]: ERROR   17:07:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 17:07:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:07:01 compute-0 openstack_network_exporter[204387]: ERROR   17:07:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 17:07:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:07:01 compute-0 nova_compute[185389]: 2026-01-26 17:07:01.538 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "refresh_cache-60ba224f-9c5d-4eb4-b501-66d7339832b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 17:07:01 compute-0 nova_compute[185389]: 2026-01-26 17:07:01.539 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquired lock "refresh_cache-60ba224f-9c5d-4eb4-b501-66d7339832b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 17:07:01 compute-0 nova_compute[185389]: 2026-01-26 17:07:01.540 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 26 17:07:01 compute-0 nova_compute[185389]: 2026-01-26 17:07:01.540 185393 DEBUG nova.objects.instance [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 60ba224f-9c5d-4eb4-b501-66d7339832b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 17:07:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:07:01.755 106955 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:07:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:07:01.756 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:07:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:07:01.757 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:07:03 compute-0 nova_compute[185389]: 2026-01-26 17:07:03.536 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Updating instance_info_cache with network_info: [{"id": "0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f", "address": "fa:16:3e:b0:51:31", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.57", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f88f3ae-fb", "ovs_interfaceid": "0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 17:07:03 compute-0 nova_compute[185389]: 2026-01-26 17:07:03.558 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Releasing lock "refresh_cache-60ba224f-9c5d-4eb4-b501-66d7339832b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 17:07:03 compute-0 nova_compute[185389]: 2026-01-26 17:07:03.559 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 26 17:07:03 compute-0 nova_compute[185389]: 2026-01-26 17:07:03.560 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:07:03 compute-0 nova_compute[185389]: 2026-01-26 17:07:03.561 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:07:03 compute-0 nova_compute[185389]: 2026-01-26 17:07:03.562 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:07:04 compute-0 nova_compute[185389]: 2026-01-26 17:07:04.350 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:07:04 compute-0 nova_compute[185389]: 2026-01-26 17:07:04.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:07:05 compute-0 nova_compute[185389]: 2026-01-26 17:07:05.341 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:07:08 compute-0 nova_compute[185389]: 2026-01-26 17:07:08.715 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:07:08 compute-0 nova_compute[185389]: 2026-01-26 17:07:08.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:07:09 compute-0 nova_compute[185389]: 2026-01-26 17:07:09.033 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:07:09 compute-0 nova_compute[185389]: 2026-01-26 17:07:09.034 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:07:09 compute-0 nova_compute[185389]: 2026-01-26 17:07:09.034 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:07:09 compute-0 nova_compute[185389]: 2026-01-26 17:07:09.035 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 17:07:09 compute-0 nova_compute[185389]: 2026-01-26 17:07:09.154 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:07:09 compute-0 nova_compute[185389]: 2026-01-26 17:07:09.228 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:07:09 compute-0 nova_compute[185389]: 2026-01-26 17:07:09.230 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:07:09 compute-0 nova_compute[185389]: 2026-01-26 17:07:09.304 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:07:09 compute-0 nova_compute[185389]: 2026-01-26 17:07:09.306 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:07:09 compute-0 nova_compute[185389]: 2026-01-26 17:07:09.354 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:07:09 compute-0 nova_compute[185389]: 2026-01-26 17:07:09.413 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json" returned: 0 in 0.108s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:07:09 compute-0 nova_compute[185389]: 2026-01-26 17:07:09.414 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:07:09 compute-0 nova_compute[185389]: 2026-01-26 17:07:09.495 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:07:09 compute-0 nova_compute[185389]: 2026-01-26 17:07:09.507 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:07:09 compute-0 nova_compute[185389]: 2026-01-26 17:07:09.593 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:07:09 compute-0 nova_compute[185389]: 2026-01-26 17:07:09.595 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:07:09 compute-0 nova_compute[185389]: 2026-01-26 17:07:09.668 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:07:09 compute-0 nova_compute[185389]: 2026-01-26 17:07:09.670 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:07:09 compute-0 nova_compute[185389]: 2026-01-26 17:07:09.739 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:07:09 compute-0 nova_compute[185389]: 2026-01-26 17:07:09.741 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:07:09 compute-0 nova_compute[185389]: 2026-01-26 17:07:09.808 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:07:10 compute-0 podman[249726]: 2026-01-26 17:07:10.22258967 +0000 UTC m=+0.097758661 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, 
config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260120, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, io.buildah.version=1.41.4)
Jan 26 17:07:10 compute-0 podman[249725]: 2026-01-26 17:07:10.227297398 +0000 UTC m=+0.103678832 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, release=1755695350, container_name=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.6, build-date=2025-08-20T13:12:41, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_id=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc.)
Jan 26 17:07:10 compute-0 podman[249727]: 2026-01-26 17:07:10.245450532 +0000 UTC m=+0.108945115 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 26 17:07:10 compute-0 nova_compute[185389]: 2026-01-26 17:07:10.273 185393 WARNING nova.virt.libvirt.driver [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:07:10 compute-0 nova_compute[185389]: 2026-01-26 17:07:10.275 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4837MB free_disk=72.39680480957031GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 17:07:10 compute-0 nova_compute[185389]: 2026-01-26 17:07:10.275 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:07:10 compute-0 nova_compute[185389]: 2026-01-26 17:07:10.276 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:07:10 compute-0 nova_compute[185389]: 2026-01-26 17:07:10.344 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:07:10 compute-0 nova_compute[185389]: 2026-01-26 17:07:10.360 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance 60ba224f-9c5d-4eb4-b501-66d7339832b9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 17:07:10 compute-0 nova_compute[185389]: 2026-01-26 17:07:10.361 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance a2578f61-3f19-40f4-a32f-97cf22569550 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 17:07:10 compute-0 nova_compute[185389]: 2026-01-26 17:07:10.361 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 17:07:10 compute-0 nova_compute[185389]: 2026-01-26 17:07:10.361 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 17:07:10 compute-0 nova_compute[185389]: 2026-01-26 17:07:10.424 185393 DEBUG nova.compute.provider_tree [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:07:10 compute-0 nova_compute[185389]: 2026-01-26 17:07:10.440 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:07:10 compute-0 nova_compute[185389]: 2026-01-26 17:07:10.442 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 17:07:10 compute-0 nova_compute[185389]: 2026-01-26 17:07:10.442 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.167s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:07:13 compute-0 podman[249785]: 2026-01-26 17:07:13.206926522 +0000 UTC m=+0.090432973 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 26 17:07:14 compute-0 nova_compute[185389]: 2026-01-26 17:07:14.359 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:07:14 compute-0 nova_compute[185389]: 2026-01-26 17:07:14.443 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:07:14 compute-0 podman[249808]: 2026-01-26 17:07:14.777051812 +0000 UTC m=+0.101523363 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Jan 26 17:07:15 compute-0 nova_compute[185389]: 2026-01-26 17:07:15.346 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:07:17 compute-0 podman[249828]: 2026-01-26 17:07:17.225833851 +0000 UTC m=+0.104846623 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi)
Jan 26 17:07:17 compute-0 podman[249829]: 2026-01-26 17:07:17.231714021 +0000 UTC m=+0.108616255 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, version=9.4, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=ubi9-container, managed_by=edpm_ansible, container_name=kepler, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', 
'/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, config_id=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Jan 26 17:07:17 compute-0 podman[249827]: 2026-01-26 17:07:17.249877126 +0000 UTC m=+0.135311323 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack 
Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 26 17:07:19 compute-0 nova_compute[185389]: 2026-01-26 17:07:19.361 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:07:20 compute-0 nova_compute[185389]: 2026-01-26 17:07:20.349 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:07:24 compute-0 nova_compute[185389]: 2026-01-26 17:07:24.365 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:07:24 compute-0 nova_compute[185389]: 2026-01-26 17:07:24.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:07:24 compute-0 nova_compute[185389]: 2026-01-26 17:07:24.720 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:07:24 compute-0 nova_compute[185389]: 2026-01-26 17:07:24.721 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:07:24 compute-0 nova_compute[185389]: 2026-01-26 17:07:24.722 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:07:24 compute-0 nova_compute[185389]: 2026-01-26 17:07:24.722 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:07:24 compute-0 nova_compute[185389]: 2026-01-26 17:07:24.723 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:07:24 compute-0 nova_compute[185389]: 2026-01-26 17:07:24.723 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:07:24 compute-0 nova_compute[185389]: 2026-01-26 17:07:24.762 185393 DEBUG nova.virt.libvirt.imagecache [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100
Jan 26 17:07:24 compute-0 nova_compute[185389]: 2026-01-26 17:07:24.783 185393 DEBUG nova.virt.libvirt.imagecache [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314
Jan 26 17:07:24 compute-0 nova_compute[185389]: 2026-01-26 17:07:24.784 185393 DEBUG nova.virt.libvirt.imagecache [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Image id 718285d9-0264-40f4-9fb3-d2faff180284 yields fingerprint 7a5e3188ac4de3f0ad8eeb8c9bbd6ccd05a86bb3 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Jan 26 17:07:24 compute-0 nova_compute[185389]: 2026-01-26 17:07:24.784 185393 INFO nova.virt.libvirt.imagecache [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] image 718285d9-0264-40f4-9fb3-d2faff180284 at (/var/lib/nova/instances/_base/7a5e3188ac4de3f0ad8eeb8c9bbd6ccd05a86bb3): checking
Jan 26 17:07:24 compute-0 nova_compute[185389]: 2026-01-26 17:07:24.785 185393 DEBUG nova.virt.libvirt.imagecache [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] image 718285d9-0264-40f4-9fb3-d2faff180284 at (/var/lib/nova/instances/_base/7a5e3188ac4de3f0ad8eeb8c9bbd6ccd05a86bb3): image is in use _mark_in_use /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:279
Jan 26 17:07:24 compute-0 nova_compute[185389]: 2026-01-26 17:07:24.789 185393 DEBUG nova.virt.libvirt.imagecache [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Image id  yields fingerprint da39a3ee5e6b4b0d3255bfef95601890afd80709 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Jan 26 17:07:24 compute-0 nova_compute[185389]: 2026-01-26 17:07:24.790 185393 DEBUG nova.virt.libvirt.imagecache [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] 60ba224f-9c5d-4eb4-b501-66d7339832b9 is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Jan 26 17:07:24 compute-0 nova_compute[185389]: 2026-01-26 17:07:24.790 185393 DEBUG nova.virt.libvirt.imagecache [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] 60ba224f-9c5d-4eb4-b501-66d7339832b9 has a disk file _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:129
Jan 26 17:07:24 compute-0 nova_compute[185389]: 2026-01-26 17:07:24.791 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:07:24 compute-0 nova_compute[185389]: 2026-01-26 17:07:24.861 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:07:24 compute-0 nova_compute[185389]: 2026-01-26 17:07:24.862 185393 DEBUG nova.virt.libvirt.imagecache [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance 60ba224f-9c5d-4eb4-b501-66d7339832b9 is backed by 7a5e3188ac4de3f0ad8eeb8c9bbd6ccd05a86bb3 _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:141
Jan 26 17:07:24 compute-0 nova_compute[185389]: 2026-01-26 17:07:24.863 185393 DEBUG nova.virt.libvirt.imagecache [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] a2578f61-3f19-40f4-a32f-97cf22569550 is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Jan 26 17:07:24 compute-0 nova_compute[185389]: 2026-01-26 17:07:24.864 185393 DEBUG nova.virt.libvirt.imagecache [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] a2578f61-3f19-40f4-a32f-97cf22569550 has a disk file _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:129
Jan 26 17:07:24 compute-0 nova_compute[185389]: 2026-01-26 17:07:24.864 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:07:24 compute-0 nova_compute[185389]: 2026-01-26 17:07:24.971 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json" returned: 0 in 0.107s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:07:24 compute-0 nova_compute[185389]: 2026-01-26 17:07:24.973 185393 DEBUG nova.virt.libvirt.imagecache [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance a2578f61-3f19-40f4-a32f-97cf22569550 is backed by 7a5e3188ac4de3f0ad8eeb8c9bbd6ccd05a86bb3 _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:141
Jan 26 17:07:24 compute-0 nova_compute[185389]: 2026-01-26 17:07:24.975 185393 INFO nova.virt.libvirt.imagecache [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Active base files: /var/lib/nova/instances/_base/7a5e3188ac4de3f0ad8eeb8c9bbd6ccd05a86bb3
Jan 26 17:07:24 compute-0 nova_compute[185389]: 2026-01-26 17:07:24.975 185393 DEBUG nova.virt.libvirt.imagecache [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350
Jan 26 17:07:24 compute-0 nova_compute[185389]: 2026-01-26 17:07:24.976 185393 DEBUG nova.virt.libvirt.imagecache [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299
Jan 26 17:07:24 compute-0 nova_compute[185389]: 2026-01-26 17:07:24.976 185393 DEBUG nova.virt.libvirt.imagecache [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284
Jan 26 17:07:25 compute-0 nova_compute[185389]: 2026-01-26 17:07:25.353 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:07:29 compute-0 nova_compute[185389]: 2026-01-26 17:07:29.370 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:07:29 compute-0 podman[201244]: time="2026-01-26T17:07:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 17:07:29 compute-0 podman[201244]: @ - - [26/Jan/2026:17:07:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 17:07:29 compute-0 podman[201244]: @ - - [26/Jan/2026:17:07:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4382 "" "Go-http-client/1.1"
Jan 26 17:07:30 compute-0 nova_compute[185389]: 2026-01-26 17:07:30.356 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:07:31 compute-0 openstack_network_exporter[204387]: ERROR   17:07:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 17:07:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:07:31 compute-0 openstack_network_exporter[204387]: ERROR   17:07:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 17:07:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:07:34 compute-0 nova_compute[185389]: 2026-01-26 17:07:34.373 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:07:35 compute-0 nova_compute[185389]: 2026-01-26 17:07:35.359 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:07:39 compute-0 nova_compute[185389]: 2026-01-26 17:07:39.376 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:07:40 compute-0 nova_compute[185389]: 2026-01-26 17:07:40.362 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:07:41 compute-0 podman[249898]: 2026-01-26 17:07:41.223761301 +0000 UTC m=+0.081969852 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 17:07:41 compute-0 podman[249897]: 2026-01-26 17:07:41.236899528 +0000 UTC m=+0.101153943 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260120, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute)
Jan 26 17:07:41 compute-0 podman[249896]: 2026-01-26 17:07:41.262060953 +0000 UTC m=+0.130437590 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, config_id=openstack_network_exporter, release=1755695350, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': 
['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.openshift.expose-services=, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers)
Jan 26 17:07:44 compute-0 podman[249958]: 2026-01-26 17:07:44.203446916 +0000 UTC m=+0.094736279 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 26 17:07:44 compute-0 nova_compute[185389]: 2026-01-26 17:07:44.380 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:07:45 compute-0 podman[249981]: 2026-01-26 17:07:45.210059756 +0000 UTC m=+0.098150561 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 17:07:45 compute-0 nova_compute[185389]: 2026-01-26 17:07:45.364 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:07:48 compute-0 podman[250001]: 2026-01-26 17:07:48.239119714 +0000 UTC m=+0.091064579 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, release-0.7.12=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_id=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, container_name=kepler, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.openshift.tags=base rhel9)
Jan 26 17:07:48 compute-0 podman[250000]: 2026-01-26 17:07:48.255357326 +0000 UTC m=+0.108480313 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Jan 26 17:07:48 compute-0 podman[249999]: 2026-01-26 17:07:48.261123833 +0000 UTC m=+0.119274047 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, 
org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:07:49 compute-0 nova_compute[185389]: 2026-01-26 17:07:49.384 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:07:50 compute-0 nova_compute[185389]: 2026-01-26 17:07:50.370 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:07:54 compute-0 nova_compute[185389]: 2026-01-26 17:07:54.389 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:07:55 compute-0 nova_compute[185389]: 2026-01-26 17:07:55.374 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:07:57 compute-0 nova_compute[185389]: 2026-01-26 17:07:57.979 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:07:57 compute-0 nova_compute[185389]: 2026-01-26 17:07:57.981 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 17:07:59 compute-0 nova_compute[185389]: 2026-01-26 17:07:59.391 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:07:59 compute-0 podman[201244]: time="2026-01-26T17:07:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 17:07:59 compute-0 podman[201244]: @ - - [26/Jan/2026:17:07:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 17:07:59 compute-0 podman[201244]: @ - - [26/Jan/2026:17:07:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4385 "" "Go-http-client/1.1"
Jan 26 17:08:00 compute-0 nova_compute[185389]: 2026-01-26 17:08:00.377 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:08:00 compute-0 nova_compute[185389]: 2026-01-26 17:08:00.722 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:08:00 compute-0 nova_compute[185389]: 2026-01-26 17:08:00.723 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:08:01 compute-0 openstack_network_exporter[204387]: ERROR   17:08:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 17:08:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:08:01 compute-0 openstack_network_exporter[204387]: ERROR   17:08:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 17:08:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:08:01 compute-0 nova_compute[185389]: 2026-01-26 17:08:01.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:08:01 compute-0 nova_compute[185389]: 2026-01-26 17:08:01.722 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 17:08:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:08:01.756 106955 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:08:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:08:01.758 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:08:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:08:01.759 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:08:02 compute-0 nova_compute[185389]: 2026-01-26 17:08:02.571 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "refresh_cache-a2578f61-3f19-40f4-a32f-97cf22569550" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 17:08:02 compute-0 nova_compute[185389]: 2026-01-26 17:08:02.571 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquired lock "refresh_cache-a2578f61-3f19-40f4-a32f-97cf22569550" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 17:08:02 compute-0 nova_compute[185389]: 2026-01-26 17:08:02.572 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 26 17:08:04 compute-0 nova_compute[185389]: 2026-01-26 17:08:04.396 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:08:04 compute-0 nova_compute[185389]: 2026-01-26 17:08:04.939 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Updating instance_info_cache with network_info: [{"id": "58a644b5-e3a2-4838-9216-8540447cf0a5", "address": "fa:16:3e:ac:8d:a5", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.107", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58a644b5-e3", "ovs_interfaceid": "58a644b5-e3a2-4838-9216-8540447cf0a5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 17:08:04 compute-0 nova_compute[185389]: 2026-01-26 17:08:04.954 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Releasing lock "refresh_cache-a2578f61-3f19-40f4-a32f-97cf22569550" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 17:08:04 compute-0 nova_compute[185389]: 2026-01-26 17:08:04.955 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 26 17:08:04 compute-0 nova_compute[185389]: 2026-01-26 17:08:04.956 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:08:04 compute-0 nova_compute[185389]: 2026-01-26 17:08:04.956 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:08:04 compute-0 nova_compute[185389]: 2026-01-26 17:08:04.957 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 26 17:08:04 compute-0 nova_compute[185389]: 2026-01-26 17:08:04.972 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 26 17:08:05 compute-0 nova_compute[185389]: 2026-01-26 17:08:05.381 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:08:06 compute-0 nova_compute[185389]: 2026-01-26 17:08:06.736 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:08:08 compute-0 nova_compute[185389]: 2026-01-26 17:08:08.716 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:08:09 compute-0 nova_compute[185389]: 2026-01-26 17:08:09.397 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:08:09 compute-0 nova_compute[185389]: 2026-01-26 17:08:09.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:08:09 compute-0 nova_compute[185389]: 2026-01-26 17:08:09.757 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:08:09 compute-0 nova_compute[185389]: 2026-01-26 17:08:09.757 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:08:09 compute-0 nova_compute[185389]: 2026-01-26 17:08:09.758 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:08:09 compute-0 nova_compute[185389]: 2026-01-26 17:08:09.758 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 17:08:09 compute-0 nova_compute[185389]: 2026-01-26 17:08:09.911 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:08:09 compute-0 nova_compute[185389]: 2026-01-26 17:08:09.977 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:08:09 compute-0 nova_compute[185389]: 2026-01-26 17:08:09.978 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:08:10 compute-0 nova_compute[185389]: 2026-01-26 17:08:10.040 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:08:10 compute-0 nova_compute[185389]: 2026-01-26 17:08:10.041 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:08:10 compute-0 nova_compute[185389]: 2026-01-26 17:08:10.108 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:08:10 compute-0 nova_compute[185389]: 2026-01-26 17:08:10.109 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:08:10 compute-0 nova_compute[185389]: 2026-01-26 17:08:10.184 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:08:10 compute-0 nova_compute[185389]: 2026-01-26 17:08:10.192 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:08:10 compute-0 nova_compute[185389]: 2026-01-26 17:08:10.278 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:08:10 compute-0 nova_compute[185389]: 2026-01-26 17:08:10.279 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:08:10 compute-0 nova_compute[185389]: 2026-01-26 17:08:10.347 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:08:10 compute-0 nova_compute[185389]: 2026-01-26 17:08:10.348 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:08:10 compute-0 nova_compute[185389]: 2026-01-26 17:08:10.382 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:08:10 compute-0 nova_compute[185389]: 2026-01-26 17:08:10.412 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:08:10 compute-0 nova_compute[185389]: 2026-01-26 17:08:10.413 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:08:10 compute-0 nova_compute[185389]: 2026-01-26 17:08:10.483 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:08:10 compute-0 nova_compute[185389]: 2026-01-26 17:08:10.825 185393 WARNING nova.virt.libvirt.driver [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:08:10 compute-0 nova_compute[185389]: 2026-01-26 17:08:10.826 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4839MB free_disk=72.39682388305664GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 17:08:10 compute-0 nova_compute[185389]: 2026-01-26 17:08:10.826 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:08:10 compute-0 nova_compute[185389]: 2026-01-26 17:08:10.826 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:08:11 compute-0 nova_compute[185389]: 2026-01-26 17:08:11.052 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance 60ba224f-9c5d-4eb4-b501-66d7339832b9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 17:08:11 compute-0 nova_compute[185389]: 2026-01-26 17:08:11.053 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance a2578f61-3f19-40f4-a32f-97cf22569550 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 17:08:11 compute-0 nova_compute[185389]: 2026-01-26 17:08:11.053 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 17:08:11 compute-0 nova_compute[185389]: 2026-01-26 17:08:11.054 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 17:08:11 compute-0 nova_compute[185389]: 2026-01-26 17:08:11.139 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Refreshing inventories for resource provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 26 17:08:11 compute-0 nova_compute[185389]: 2026-01-26 17:08:11.226 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Updating ProviderTree inventory for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 26 17:08:11 compute-0 nova_compute[185389]: 2026-01-26 17:08:11.227 185393 DEBUG nova.compute.provider_tree [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Updating inventory in ProviderTree for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 26 17:08:11 compute-0 nova_compute[185389]: 2026-01-26 17:08:11.254 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Refreshing aggregate associations for resource provider b0bb5d31-f35b-4a04-b67d-66acc24fb822, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 26 17:08:11 compute-0 nova_compute[185389]: 2026-01-26 17:08:11.292 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Refreshing trait associations for resource provider b0bb5d31-f35b-4a04-b67d-66acc24fb822, traits: COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SHA,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_ABM,HW_CPU_X86_AMD_SVM,HW_CPU_X86_F16C,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE42,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_BMI,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_MMX,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 26 17:08:11 compute-0 nova_compute[185389]: 2026-01-26 17:08:11.363 185393 DEBUG nova.compute.provider_tree [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:08:11 compute-0 nova_compute[185389]: 2026-01-26 17:08:11.386 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:08:11 compute-0 nova_compute[185389]: 2026-01-26 17:08:11.389 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 17:08:11 compute-0 nova_compute[185389]: 2026-01-26 17:08:11.390 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.563s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:08:11 compute-0 nova_compute[185389]: 2026-01-26 17:08:11.391 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:08:11 compute-0 nova_compute[185389]: 2026-01-26 17:08:11.391 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 26 17:08:12 compute-0 podman[250091]: 2026-01-26 17:08:12.186068029 +0000 UTC m=+0.068661999 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 26 17:08:12 compute-0 podman[250089]: 2026-01-26 17:08:12.20007014 +0000 UTC m=+0.092902779 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, config_id=openstack_network_exporter, distribution-scope=public, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, name=ubi9-minimal, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7)
Jan 26 17:08:12 compute-0 podman[250090]: 2026-01-26 17:08:12.201477778 +0000 UTC m=+0.089479345 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260120, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, tcib_managed=true)
Jan 26 17:08:14 compute-0 nova_compute[185389]: 2026-01-26 17:08:14.401 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:08:14 compute-0 podman[250150]: 2026-01-26 17:08:14.75172317 +0000 UTC m=+0.068686720 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 26 17:08:15 compute-0 nova_compute[185389]: 2026-01-26 17:08:15.385 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:08:16 compute-0 podman[250174]: 2026-01-26 17:08:16.182857461 +0000 UTC m=+0.073064629 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 26 17:08:16 compute-0 nova_compute[185389]: 2026-01-26 17:08:16.405 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:08:16 compute-0 nova_compute[185389]: 2026-01-26 17:08:16.608 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:08:16 compute-0 nova_compute[185389]: 2026-01-26 17:08:16.633 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Triggering sync for uuid 60ba224f-9c5d-4eb4-b501-66d7339832b9 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 26 17:08:16 compute-0 nova_compute[185389]: 2026-01-26 17:08:16.633 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Triggering sync for uuid a2578f61-3f19-40f4-a32f-97cf22569550 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 26 17:08:16 compute-0 nova_compute[185389]: 2026-01-26 17:08:16.634 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "60ba224f-9c5d-4eb4-b501-66d7339832b9" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:08:16 compute-0 nova_compute[185389]: 2026-01-26 17:08:16.634 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "60ba224f-9c5d-4eb4-b501-66d7339832b9" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:08:16 compute-0 nova_compute[185389]: 2026-01-26 17:08:16.635 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "a2578f61-3f19-40f4-a32f-97cf22569550" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:08:16 compute-0 nova_compute[185389]: 2026-01-26 17:08:16.635 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "a2578f61-3f19-40f4-a32f-97cf22569550" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:08:16 compute-0 nova_compute[185389]: 2026-01-26 17:08:16.671 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "60ba224f-9c5d-4eb4-b501-66d7339832b9" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.037s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:08:16 compute-0 nova_compute[185389]: 2026-01-26 17:08:16.672 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "a2578f61-3f19-40f4-a32f-97cf22569550" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.037s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:08:16 compute-0 nova_compute[185389]: 2026-01-26 17:08:16.740 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:08:19 compute-0 podman[250194]: 2026-01-26 17:08:19.190689392 +0000 UTC m=+0.071962549 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 26 17:08:19 compute-0 podman[250195]: 2026-01-26 17:08:19.203196382 +0000 UTC m=+0.081380995 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, architecture=x86_64, 
build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, vcs-type=git, release-0.7.12=, vendor=Red Hat, Inc., config_id=kepler, io.k8s.display-name=Red Hat Universal Base Image 9)
Jan 26 17:08:19 compute-0 podman[250193]: 2026-01-26 17:08:19.221416919 +0000 UTC m=+0.107065165 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, 
org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 17:08:19 compute-0 nova_compute[185389]: 2026-01-26 17:08:19.403 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:08:20 compute-0 nova_compute[185389]: 2026-01-26 17:08:20.386 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:08:24 compute-0 nova_compute[185389]: 2026-01-26 17:08:24.406 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:08:25 compute-0 nova_compute[185389]: 2026-01-26 17:08:25.389 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:08:25 compute-0 nova_compute[185389]: 2026-01-26 17:08:25.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:08:29 compute-0 nova_compute[185389]: 2026-01-26 17:08:29.410 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:08:29 compute-0 podman[201244]: time="2026-01-26T17:08:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 17:08:29 compute-0 podman[201244]: @ - - [26/Jan/2026:17:08:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 17:08:29 compute-0 podman[201244]: @ - - [26/Jan/2026:17:08:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4390 "" "Go-http-client/1.1"
Jan 26 17:08:30 compute-0 nova_compute[185389]: 2026-01-26 17:08:30.391 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.350 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.351 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.352 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.352 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f04ce8af020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.353 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.354 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.354 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.354 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.354 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.354 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.354 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.354 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.355 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.355 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.355 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.355 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.355 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8adb20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.356 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.356 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.356 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.356 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.356 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.356 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.356 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.356 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.357 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.357 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.357 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.357 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.359 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '60ba224f-9c5d-4eb4-b501-66d7339832b9', 'name': 'test_0', 'flavor': {'id': 'c2a8df4d-a1d7-42a3-8279-8c7de8a1a662', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '718285d9-0264-40f4-9fb3-d2faff180284'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'aa8f1f3bbce34237a208c8e92ca9286f', 'user_id': '3c0ab9326d69400aa6a4a91432885d7f', 'hostId': '5b2ad2004cb5a5985538ff82d4dda707a9aa9c0c35c745039e18e89b', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.362 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'a2578f61-3f19-40f4-a32f-97cf22569550', 'name': 'vn-vo2qfhx-3o3bmw4mfsy3-eo23jljqgyv6-vnf-pi7veetjym6y', 'flavor': {'id': 'c2a8df4d-a1d7-42a3-8279-8c7de8a1a662', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '718285d9-0264-40f4-9fb3-d2faff180284'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'aa8f1f3bbce34237a208c8e92ca9286f', 'user_id': '3c0ab9326d69400aa6a4a91432885d7f', 'hostId': '5b2ad2004cb5a5985538ff82d4dda707a9aa9c0c35c745039e18e89b', 'status': 'active', 'metadata': {'metering.server_group': '06b33269-d1c6-4fb9-a44b-be304982a550'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.363 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.363 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.363 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.363 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.364 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-26T17:08:31.363637) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:08:31 compute-0 openstack_network_exporter[204387]: ERROR   17:08:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 17:08:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:08:31 compute-0 openstack_network_exporter[204387]: ERROR   17:08:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 17:08:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.441 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.442 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.442 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.503 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.504 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.504 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.505 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.505 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f04ce8af080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.505 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.505 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.505 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.505 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.506 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.latency volume: 876270399 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.506 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.latency volume: 10769042 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.506 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.506 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-26T17:08:31.505810) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.507 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.latency volume: 1221465504 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.507 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.latency volume: 9811607 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.507 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.508 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.508 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f04ce8af0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.508 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.508 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.508 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.508 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.508 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.508 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.509 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.509 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.509 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.509 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.510 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.510 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f04ce8af470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.510 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.510 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.510 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.510 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.511 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-26T17:08:31.508547) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.511 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-26T17:08:31.510898) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.514 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.519 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.519 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.519 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f04ce8ac260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.520 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.520 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f04cfa81c70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.520 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.520 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.520 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.520 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.521 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-26T17:08:31.520859) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.544 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/cpu volume: 56240000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.566 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/cpu volume: 50570000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.567 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.567 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f04ce8ac170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.567 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.567 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.568 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.568 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.568 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.568 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.569 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.569 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f04ce8af140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.569 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.569 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.569 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.570 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.569 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-26T17:08:31.568147) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.570 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.570 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f04ce8adb50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.571 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.571 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.571 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.571 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.571 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.571 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-26T17:08:31.570019) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.571 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.572 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-26T17:08:31.571338) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.572 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.572 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f04ce8ad9a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.572 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.572 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.572 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.572 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.572 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.bytes volume: 2412 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.573 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.outgoing.bytes volume: 2538 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.573 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.573 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f04ce8af1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.573 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-26T17:08:31.572718) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.574 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.574 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.574 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.574 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.574 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.575 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-26T17:08:31.574398) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.575 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f04ce8ad8e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.575 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.575 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.575 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.575 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.575 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.576 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.outgoing.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.576 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.576 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f04ce8ada60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.576 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.576 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.576 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.577 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.577 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.577 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.577 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.578 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-26T17:08:31.575634) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.578 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f04ce8adaf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.578 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.578 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f04ce8ad880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.578 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.578 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.579 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.579 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.579 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-26T17:08:31.577065) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.579 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.579 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-26T17:08:31.579206) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.579 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.580 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.580 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f04ce8af3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.580 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.580 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.580 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.580 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.581 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/memory.usage volume: 48.79296875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.581 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/memory.usage volume: 48.953125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.581 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.582 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f04ce8af6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.582 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.582 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.582 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.582 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.582 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-26T17:08:31.580899) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.582 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.583 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.583 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.583 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f04ce8af410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.583 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.584 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.584 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-26T17:08:31.582620) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.584 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.584 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.584 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.bytes volume: 2304 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.584 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-26T17:08:31.584250) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.584 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.incoming.bytes volume: 1696 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.585 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.585 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f04ce8ac470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.585 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.585 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.585 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.585 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.586 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.586 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-26T17:08:31.585769) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.586 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.586 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.586 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f04d17142f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.587 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.587 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.587 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.587 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.587 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-26T17:08:31.587294) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.611 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.612 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.612 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.646 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.646 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.646 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.647 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.647 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f04ce8afe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.647 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.647 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.647 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.647 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.648 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.648 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.648 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-26T17:08:31.647891) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.648 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.648 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.649 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.649 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.649 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.649 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f04ce8aede0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.650 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.650 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.650 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.650 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.650 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.650 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.650 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.651 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.651 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.651 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.651 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-26T17:08:31.650247) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.652 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.652 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f04ce8aef00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.652 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.652 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.652 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.652 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.652 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.latency volume: 331117122 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.652 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.latency volume: 97972603 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.653 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.latency volume: 58692222 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.653 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-26T17:08:31.652482) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.653 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.latency volume: 437272566 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.653 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.latency volume: 86953754 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.654 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.latency volume: 62824695 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.654 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.654 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f04ce8ac710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.655 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.655 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.655 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.655 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.655 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.655 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-26T17:08:31.655487) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.656 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.656 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.656 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f04ce8aef60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.657 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.657 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.657 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.657 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.657 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-26T17:08:31.657466) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.657 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.658 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.658 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.658 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.659 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.659 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.659 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.660 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f04ce8aefc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.660 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.660 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.660 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.660 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.661 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.661 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.662 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.662 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-26T17:08:31.660802) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.662 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.663 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.663 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.663 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.664 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.664 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.664 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.665 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.665 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.665 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.665 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.665 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.665 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.665 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.665 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.665 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.665 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.665 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.666 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.666 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.666 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.666 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.666 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.666 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.666 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.666 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.666 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.667 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.667 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:08:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:08:31.667 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:08:34 compute-0 nova_compute[185389]: 2026-01-26 17:08:34.414 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:08:35 compute-0 nova_compute[185389]: 2026-01-26 17:08:35.394 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:08:39 compute-0 nova_compute[185389]: 2026-01-26 17:08:39.418 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:08:40 compute-0 nova_compute[185389]: 2026-01-26 17:08:40.395 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:08:43 compute-0 sshd-session[250255]: Connection closed by 206.168.34.223 port 7898 [preauth]
Jan 26 17:08:43 compute-0 podman[250258]: 2026-01-26 17:08:43.220924917 +0000 UTC m=+0.090925306 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., architecture=x86_64, version=9.6, distribution-scope=public, vcs-type=git, config_id=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, release=1755695350, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 26 17:08:43 compute-0 podman[250260]: 2026-01-26 17:08:43.237762085 +0000 UTC m=+0.107762064 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 26 17:08:43 compute-0 podman[250259]: 2026-01-26 17:08:43.245284289 +0000 UTC m=+0.114703462 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260120, org.label-schema.name=CentOS Stream 10 Base Image)
Jan 26 17:08:44 compute-0 nova_compute[185389]: 2026-01-26 17:08:44.421 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:08:45 compute-0 podman[250316]: 2026-01-26 17:08:45.20673052 +0000 UTC m=+0.095241432 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 26 17:08:45 compute-0 nova_compute[185389]: 2026-01-26 17:08:45.398 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:08:47 compute-0 podman[250339]: 2026-01-26 17:08:47.224640217 +0000 UTC m=+0.105837291 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 26 17:08:49 compute-0 nova_compute[185389]: 2026-01-26 17:08:49.425 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:08:50 compute-0 podman[250360]: 2026-01-26 17:08:50.20529699 +0000 UTC m=+0.079641859 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, vendor=Red Hat, Inc., version=9.4, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=kepler, release=1214.1726694543, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 26 17:08:50 compute-0 podman[250359]: 2026-01-26 17:08:50.216280819 +0000 UTC m=+0.099222402 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, 
config_id=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 26 17:08:50 compute-0 podman[250358]: 2026-01-26 17:08:50.224172243 +0000 UTC m=+0.112665287 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes 
Operator team, org.label-schema.schema-version=1.0)
Jan 26 17:08:50 compute-0 nova_compute[185389]: 2026-01-26 17:08:50.399 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:08:54 compute-0 nova_compute[185389]: 2026-01-26 17:08:54.428 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:08:55 compute-0 nova_compute[185389]: 2026-01-26 17:08:55.401 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:08:58 compute-0 nova_compute[185389]: 2026-01-26 17:08:58.741 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:08:58 compute-0 nova_compute[185389]: 2026-01-26 17:08:58.741 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 17:08:59 compute-0 nova_compute[185389]: 2026-01-26 17:08:59.434 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:08:59 compute-0 podman[201244]: time="2026-01-26T17:08:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 17:08:59 compute-0 podman[201244]: @ - - [26/Jan/2026:17:08:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 17:08:59 compute-0 podman[201244]: @ - - [26/Jan/2026:17:08:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4391 "" "Go-http-client/1.1"
Jan 26 17:09:00 compute-0 nova_compute[185389]: 2026-01-26 17:09:00.403 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:09:01 compute-0 openstack_network_exporter[204387]: ERROR   17:09:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 17:09:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:09:01 compute-0 openstack_network_exporter[204387]: ERROR   17:09:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 17:09:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:09:01 compute-0 nova_compute[185389]: 2026-01-26 17:09:01.722 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:09:01 compute-0 nova_compute[185389]: 2026-01-26 17:09:01.722 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 17:09:01 compute-0 nova_compute[185389]: 2026-01-26 17:09:01.723 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 17:09:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:09:01.758 106955 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:09:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:09:01.759 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:09:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:09:01.760 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:09:01 compute-0 nova_compute[185389]: 2026-01-26 17:09:01.924 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "refresh_cache-60ba224f-9c5d-4eb4-b501-66d7339832b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 17:09:01 compute-0 nova_compute[185389]: 2026-01-26 17:09:01.924 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquired lock "refresh_cache-60ba224f-9c5d-4eb4-b501-66d7339832b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 17:09:01 compute-0 nova_compute[185389]: 2026-01-26 17:09:01.925 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 26 17:09:01 compute-0 nova_compute[185389]: 2026-01-26 17:09:01.925 185393 DEBUG nova.objects.instance [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 60ba224f-9c5d-4eb4-b501-66d7339832b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 17:09:03 compute-0 nova_compute[185389]: 2026-01-26 17:09:03.711 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Updating instance_info_cache with network_info: [{"id": "0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f", "address": "fa:16:3e:b0:51:31", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.57", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f88f3ae-fb", "ovs_interfaceid": "0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 17:09:03 compute-0 nova_compute[185389]: 2026-01-26 17:09:03.728 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Releasing lock "refresh_cache-60ba224f-9c5d-4eb4-b501-66d7339832b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 17:09:03 compute-0 nova_compute[185389]: 2026-01-26 17:09:03.729 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 26 17:09:03 compute-0 nova_compute[185389]: 2026-01-26 17:09:03.731 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:09:03 compute-0 nova_compute[185389]: 2026-01-26 17:09:03.732 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:09:03 compute-0 nova_compute[185389]: 2026-01-26 17:09:03.732 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:09:04 compute-0 nova_compute[185389]: 2026-01-26 17:09:04.438 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:09:05 compute-0 nova_compute[185389]: 2026-01-26 17:09:05.405 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:09:07 compute-0 nova_compute[185389]: 2026-01-26 17:09:07.721 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:09:09 compute-0 nova_compute[185389]: 2026-01-26 17:09:09.442 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:09:09 compute-0 nova_compute[185389]: 2026-01-26 17:09:09.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:09:09 compute-0 nova_compute[185389]: 2026-01-26 17:09:09.803 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:09:09 compute-0 nova_compute[185389]: 2026-01-26 17:09:09.804 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:09:09 compute-0 nova_compute[185389]: 2026-01-26 17:09:09.804 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:09:09 compute-0 nova_compute[185389]: 2026-01-26 17:09:09.805 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 17:09:09 compute-0 nova_compute[185389]: 2026-01-26 17:09:09.953 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:09:10 compute-0 nova_compute[185389]: 2026-01-26 17:09:10.038 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:09:10 compute-0 nova_compute[185389]: 2026-01-26 17:09:10.040 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:09:10 compute-0 nova_compute[185389]: 2026-01-26 17:09:10.110 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:09:10 compute-0 nova_compute[185389]: 2026-01-26 17:09:10.111 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:09:10 compute-0 nova_compute[185389]: 2026-01-26 17:09:10.177 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:09:10 compute-0 nova_compute[185389]: 2026-01-26 17:09:10.178 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:09:10 compute-0 nova_compute[185389]: 2026-01-26 17:09:10.237 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:09:10 compute-0 nova_compute[185389]: 2026-01-26 17:09:10.245 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:09:10 compute-0 nova_compute[185389]: 2026-01-26 17:09:10.314 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:09:10 compute-0 nova_compute[185389]: 2026-01-26 17:09:10.315 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:09:10 compute-0 nova_compute[185389]: 2026-01-26 17:09:10.380 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:09:10 compute-0 nova_compute[185389]: 2026-01-26 17:09:10.381 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:09:10 compute-0 nova_compute[185389]: 2026-01-26 17:09:10.409 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:09:10 compute-0 nova_compute[185389]: 2026-01-26 17:09:10.456 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:09:10 compute-0 nova_compute[185389]: 2026-01-26 17:09:10.456 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:09:10 compute-0 nova_compute[185389]: 2026-01-26 17:09:10.529 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:09:10 compute-0 nova_compute[185389]: 2026-01-26 17:09:10.944 185393 WARNING nova.virt.libvirt.driver [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:09:10 compute-0 nova_compute[185389]: 2026-01-26 17:09:10.945 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4838MB free_disk=72.39682388305664GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 17:09:10 compute-0 nova_compute[185389]: 2026-01-26 17:09:10.946 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:09:10 compute-0 nova_compute[185389]: 2026-01-26 17:09:10.946 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:09:11 compute-0 nova_compute[185389]: 2026-01-26 17:09:11.045 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance 60ba224f-9c5d-4eb4-b501-66d7339832b9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 17:09:11 compute-0 nova_compute[185389]: 2026-01-26 17:09:11.046 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance a2578f61-3f19-40f4-a32f-97cf22569550 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 17:09:11 compute-0 nova_compute[185389]: 2026-01-26 17:09:11.046 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 17:09:11 compute-0 nova_compute[185389]: 2026-01-26 17:09:11.046 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 17:09:11 compute-0 nova_compute[185389]: 2026-01-26 17:09:11.121 185393 DEBUG nova.compute.provider_tree [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:09:11 compute-0 nova_compute[185389]: 2026-01-26 17:09:11.139 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:09:11 compute-0 nova_compute[185389]: 2026-01-26 17:09:11.141 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 17:09:11 compute-0 nova_compute[185389]: 2026-01-26 17:09:11.141 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.195s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:09:12 compute-0 nova_compute[185389]: 2026-01-26 17:09:12.137 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:09:14 compute-0 podman[250443]: 2026-01-26 17:09:14.218377877 +0000 UTC m=+0.093725842 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260120, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.4, org.label-schema.license=GPLv2)
Jan 26 17:09:14 compute-0 podman[250444]: 2026-01-26 17:09:14.257677026 +0000 UTC m=+0.114459656 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 26 17:09:14 compute-0 podman[250442]: 2026-01-26 17:09:14.273472225 +0000 UTC m=+0.140923975 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, release=1755695350, version=9.6, vcs-type=git, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 26 17:09:14 compute-0 nova_compute[185389]: 2026-01-26 17:09:14.449 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:09:15 compute-0 nova_compute[185389]: 2026-01-26 17:09:15.412 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:09:15 compute-0 nova_compute[185389]: 2026-01-26 17:09:15.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:09:16 compute-0 podman[250507]: 2026-01-26 17:09:16.185898262 +0000 UTC m=+0.068676090 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 26 17:09:18 compute-0 podman[250532]: 2026-01-26 17:09:18.20713682 +0000 UTC m=+0.092675402 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 26 17:09:18 compute-0 sshd-session[250530]: Invalid user sol from 80.94.92.171 port 56160
Jan 26 17:09:18 compute-0 sshd-session[250530]: Connection closed by invalid user sol 80.94.92.171 port 56160 [preauth]
Jan 26 17:09:19 compute-0 nova_compute[185389]: 2026-01-26 17:09:19.452 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:09:20 compute-0 nova_compute[185389]: 2026-01-26 17:09:20.414 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:09:21 compute-0 podman[250553]: 2026-01-26 17:09:21.21626703 +0000 UTC m=+0.083731149 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, release-0.7.12=, com.redhat.component=ubi9-container, config_id=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vendor=Red Hat, Inc., container_name=kepler, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Jan 26 17:09:21 compute-0 podman[250551]: 2026-01-26 17:09:21.238462694 +0000 UTC m=+0.119976625 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 26 17:09:21 compute-0 podman[250552]: 2026-01-26 17:09:21.239940114 +0000 UTC m=+0.111373272 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi)
Jan 26 17:09:24 compute-0 nova_compute[185389]: 2026-01-26 17:09:24.454 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:09:25 compute-0 nova_compute[185389]: 2026-01-26 17:09:25.418 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:09:29 compute-0 nova_compute[185389]: 2026-01-26 17:09:29.458 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:09:29 compute-0 podman[201244]: time="2026-01-26T17:09:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 17:09:29 compute-0 podman[201244]: @ - - [26/Jan/2026:17:09:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 17:09:29 compute-0 podman[201244]: @ - - [26/Jan/2026:17:09:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4381 "" "Go-http-client/1.1"
Jan 26 17:09:30 compute-0 nova_compute[185389]: 2026-01-26 17:09:30.423 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:09:31 compute-0 openstack_network_exporter[204387]: ERROR   17:09:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 17:09:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:09:31 compute-0 openstack_network_exporter[204387]: ERROR   17:09:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 17:09:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:09:34 compute-0 nova_compute[185389]: 2026-01-26 17:09:34.463 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:09:35 compute-0 nova_compute[185389]: 2026-01-26 17:09:35.426 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:09:39 compute-0 nova_compute[185389]: 2026-01-26 17:09:39.468 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:09:40 compute-0 nova_compute[185389]: 2026-01-26 17:09:40.429 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:09:44 compute-0 nova_compute[185389]: 2026-01-26 17:09:44.473 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:09:44 compute-0 podman[250617]: 2026-01-26 17:09:44.784015025 +0000 UTC m=+0.075127594 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 17:09:44 compute-0 podman[250615]: 2026-01-26 17:09:44.794619514 +0000 UTC m=+0.086895905 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat 
Universal Base Image 9., container_name=openstack_network_exporter, managed_by=edpm_ansible, config_id=openstack_network_exporter, io.openshift.expose-services=, vcs-type=git, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, release=1755695350, vendor=Red Hat, Inc., architecture=x86_64, version=9.6, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 26 17:09:44 compute-0 podman[250616]: 2026-01-26 17:09:44.820249862 +0000 UTC m=+0.112524623 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20260120, tcib_managed=true)
Jan 26 17:09:45 compute-0 nova_compute[185389]: 2026-01-26 17:09:45.432 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:09:47 compute-0 podman[250680]: 2026-01-26 17:09:47.19622605 +0000 UTC m=+0.078622831 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 26 17:09:49 compute-0 podman[250705]: 2026-01-26 17:09:49.233880511 +0000 UTC m=+0.110742914 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 26 17:09:49 compute-0 nova_compute[185389]: 2026-01-26 17:09:49.477 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:09:50 compute-0 nova_compute[185389]: 2026-01-26 17:09:50.437 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:09:52 compute-0 podman[250727]: 2026-01-26 17:09:52.236680717 +0000 UTC m=+0.104137244 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1214.1726694543, vcs-type=git, distribution-scope=public, config_id=kepler, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, name=ubi9, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Jan 26 17:09:52 compute-0 podman[250726]: 2026-01-26 17:09:52.241594321 +0000 UTC m=+0.107019704 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team)
Jan 26 17:09:52 compute-0 podman[250725]: 2026-01-26 17:09:52.266057846 +0000 UTC m=+0.134203653 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 26 17:09:54 compute-0 nova_compute[185389]: 2026-01-26 17:09:54.481 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:09:55 compute-0 nova_compute[185389]: 2026-01-26 17:09:55.441 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:09:59 compute-0 nova_compute[185389]: 2026-01-26 17:09:59.483 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:09:59 compute-0 podman[201244]: time="2026-01-26T17:09:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 17:09:59 compute-0 podman[201244]: @ - - [26/Jan/2026:17:09:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 17:09:59 compute-0 podman[201244]: @ - - [26/Jan/2026:17:09:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4386 "" "Go-http-client/1.1"
Jan 26 17:10:00 compute-0 nova_compute[185389]: 2026-01-26 17:10:00.444 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:10:00 compute-0 nova_compute[185389]: 2026-01-26 17:10:00.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:10:00 compute-0 nova_compute[185389]: 2026-01-26 17:10:00.720 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 17:10:01 compute-0 openstack_network_exporter[204387]: ERROR   17:10:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 17:10:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:10:01 compute-0 openstack_network_exporter[204387]: ERROR   17:10:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 17:10:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:10:01 compute-0 nova_compute[185389]: 2026-01-26 17:10:01.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:10:01 compute-0 nova_compute[185389]: 2026-01-26 17:10:01.721 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 17:10:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:10:01.760 106955 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:10:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:10:01.761 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:10:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:10:01.762 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:10:02 compute-0 nova_compute[185389]: 2026-01-26 17:10:02.745 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "refresh_cache-a2578f61-3f19-40f4-a32f-97cf22569550" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 17:10:02 compute-0 nova_compute[185389]: 2026-01-26 17:10:02.746 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquired lock "refresh_cache-a2578f61-3f19-40f4-a32f-97cf22569550" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 17:10:02 compute-0 nova_compute[185389]: 2026-01-26 17:10:02.746 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 26 17:10:04 compute-0 nova_compute[185389]: 2026-01-26 17:10:04.486 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:10:04 compute-0 nova_compute[185389]: 2026-01-26 17:10:04.533 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Updating instance_info_cache with network_info: [{"id": "58a644b5-e3a2-4838-9216-8540447cf0a5", "address": "fa:16:3e:ac:8d:a5", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.107", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58a644b5-e3", "ovs_interfaceid": "58a644b5-e3a2-4838-9216-8540447cf0a5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 17:10:04 compute-0 nova_compute[185389]: 2026-01-26 17:10:04.556 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Releasing lock "refresh_cache-a2578f61-3f19-40f4-a32f-97cf22569550" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 17:10:04 compute-0 nova_compute[185389]: 2026-01-26 17:10:04.556 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 26 17:10:04 compute-0 nova_compute[185389]: 2026-01-26 17:10:04.557 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:10:04 compute-0 nova_compute[185389]: 2026-01-26 17:10:04.557 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:10:04 compute-0 nova_compute[185389]: 2026-01-26 17:10:04.557 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:10:05 compute-0 nova_compute[185389]: 2026-01-26 17:10:05.447 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:10:07 compute-0 nova_compute[185389]: 2026-01-26 17:10:07.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:10:09 compute-0 nova_compute[185389]: 2026-01-26 17:10:09.487 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:10:10 compute-0 nova_compute[185389]: 2026-01-26 17:10:10.450 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:10:10 compute-0 nova_compute[185389]: 2026-01-26 17:10:10.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:10:10 compute-0 nova_compute[185389]: 2026-01-26 17:10:10.751 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:10:10 compute-0 nova_compute[185389]: 2026-01-26 17:10:10.753 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:10:10 compute-0 nova_compute[185389]: 2026-01-26 17:10:10.753 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:10:10 compute-0 nova_compute[185389]: 2026-01-26 17:10:10.754 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 17:10:10 compute-0 nova_compute[185389]: 2026-01-26 17:10:10.856 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:10:10 compute-0 nova_compute[185389]: 2026-01-26 17:10:10.932 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:10:10 compute-0 nova_compute[185389]: 2026-01-26 17:10:10.934 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:10:11 compute-0 nova_compute[185389]: 2026-01-26 17:10:11.003 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:10:11 compute-0 nova_compute[185389]: 2026-01-26 17:10:11.006 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:10:11 compute-0 nova_compute[185389]: 2026-01-26 17:10:11.091 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:10:11 compute-0 nova_compute[185389]: 2026-01-26 17:10:11.093 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:10:11 compute-0 nova_compute[185389]: 2026-01-26 17:10:11.162 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:10:11 compute-0 nova_compute[185389]: 2026-01-26 17:10:11.171 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:10:11 compute-0 nova_compute[185389]: 2026-01-26 17:10:11.245 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:10:11 compute-0 nova_compute[185389]: 2026-01-26 17:10:11.248 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:10:11 compute-0 nova_compute[185389]: 2026-01-26 17:10:11.321 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:10:11 compute-0 nova_compute[185389]: 2026-01-26 17:10:11.323 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:10:11 compute-0 nova_compute[185389]: 2026-01-26 17:10:11.394 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:10:11 compute-0 nova_compute[185389]: 2026-01-26 17:10:11.396 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:10:11 compute-0 nova_compute[185389]: 2026-01-26 17:10:11.461 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:10:11 compute-0 nova_compute[185389]: 2026-01-26 17:10:11.828 185393 WARNING nova.virt.libvirt.driver [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:10:11 compute-0 nova_compute[185389]: 2026-01-26 17:10:11.829 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4845MB free_disk=72.39682006835938GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 17:10:11 compute-0 nova_compute[185389]: 2026-01-26 17:10:11.829 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:10:11 compute-0 nova_compute[185389]: 2026-01-26 17:10:11.830 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:10:11 compute-0 nova_compute[185389]: 2026-01-26 17:10:11.942 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance 60ba224f-9c5d-4eb4-b501-66d7339832b9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 17:10:11 compute-0 nova_compute[185389]: 2026-01-26 17:10:11.943 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance a2578f61-3f19-40f4-a32f-97cf22569550 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 17:10:11 compute-0 nova_compute[185389]: 2026-01-26 17:10:11.943 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 17:10:11 compute-0 nova_compute[185389]: 2026-01-26 17:10:11.943 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 17:10:12 compute-0 nova_compute[185389]: 2026-01-26 17:10:12.026 185393 DEBUG nova.compute.provider_tree [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:10:12 compute-0 nova_compute[185389]: 2026-01-26 17:10:12.200 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:10:12 compute-0 nova_compute[185389]: 2026-01-26 17:10:12.203 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 17:10:12 compute-0 nova_compute[185389]: 2026-01-26 17:10:12.203 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.373s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:10:14 compute-0 nova_compute[185389]: 2026-01-26 17:10:14.199 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:10:14 compute-0 nova_compute[185389]: 2026-01-26 17:10:14.490 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:10:15 compute-0 podman[250809]: 2026-01-26 17:10:15.192301887 +0000 UTC m=+0.080787029 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, config_id=openstack_network_exporter, name=ubi9-minimal, vendor=Red Hat, Inc., version=9.6, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.component=ubi9-minimal-container)
Jan 26 17:10:15 compute-0 podman[250811]: 2026-01-26 17:10:15.202176367 +0000 UTC m=+0.082422375 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 26 17:10:15 compute-0 podman[250810]: 2026-01-26 17:10:15.219809916 +0000 UTC m=+0.102335485 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20260120, org.label-schema.vendor=CentOS)
Jan 26 17:10:15 compute-0 nova_compute[185389]: 2026-01-26 17:10:15.454 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:10:17 compute-0 nova_compute[185389]: 2026-01-26 17:10:17.714 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:10:17 compute-0 nova_compute[185389]: 2026-01-26 17:10:17.762 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:10:18 compute-0 podman[250869]: 2026-01-26 17:10:18.230576647 +0000 UTC m=+0.109450549 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 26 17:10:19 compute-0 nova_compute[185389]: 2026-01-26 17:10:19.494 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:10:20 compute-0 podman[250892]: 2026-01-26 17:10:20.213076638 +0000 UTC m=+0.099766675 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 26 17:10:20 compute-0 nova_compute[185389]: 2026-01-26 17:10:20.455 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:10:23 compute-0 podman[250913]: 2026-01-26 17:10:23.244263658 +0000 UTC m=+0.110781067 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, org.label-schema.build-date=20251202)
Jan 26 17:10:23 compute-0 podman[250919]: 2026-01-26 17:10:23.262643888 +0000 UTC m=+0.106253645 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1214.1726694543, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, io.openshift.expose-services=, maintainer=Red Hat, Inc., io.buildah.version=1.29.0, version=9.4, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Jan 26 17:10:23 compute-0 podman[250912]: 2026-01-26 17:10:23.289708073 +0000 UTC m=+0.162044959 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, 
managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 26 17:10:24 compute-0 nova_compute[185389]: 2026-01-26 17:10:24.497 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:10:25 compute-0 nova_compute[185389]: 2026-01-26 17:10:25.458 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:10:29 compute-0 nova_compute[185389]: 2026-01-26 17:10:29.500 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:10:29 compute-0 podman[201244]: time="2026-01-26T17:10:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 17:10:29 compute-0 podman[201244]: @ - - [26/Jan/2026:17:10:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 17:10:29 compute-0 podman[201244]: @ - - [26/Jan/2026:17:10:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4390 "" "Go-http-client/1.1"
Jan 26 17:10:30 compute-0 nova_compute[185389]: 2026-01-26 17:10:30.462 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.350 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.350 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.351 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.351 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f04ce8af020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.352 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.352 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.352 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.352 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.352 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.353 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.353 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.353 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.353 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.353 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.353 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.353 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.353 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8adb20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.354 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.354 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.354 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.354 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.354 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.354 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.354 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.354 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.354 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.354 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.354 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.355 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.358 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '60ba224f-9c5d-4eb4-b501-66d7339832b9', 'name': 'test_0', 'flavor': {'id': 'c2a8df4d-a1d7-42a3-8279-8c7de8a1a662', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '718285d9-0264-40f4-9fb3-d2faff180284'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'aa8f1f3bbce34237a208c8e92ca9286f', 'user_id': '3c0ab9326d69400aa6a4a91432885d7f', 'hostId': '5b2ad2004cb5a5985538ff82d4dda707a9aa9c0c35c745039e18e89b', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.361 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'a2578f61-3f19-40f4-a32f-97cf22569550', 'name': 'vn-vo2qfhx-3o3bmw4mfsy3-eo23jljqgyv6-vnf-pi7veetjym6y', 'flavor': {'id': 'c2a8df4d-a1d7-42a3-8279-8c7de8a1a662', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '718285d9-0264-40f4-9fb3-d2faff180284'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'aa8f1f3bbce34237a208c8e92ca9286f', 'user_id': '3c0ab9326d69400aa6a4a91432885d7f', 'hostId': '5b2ad2004cb5a5985538ff82d4dda707a9aa9c0c35c745039e18e89b', 'status': 'active', 'metadata': {'metering.server_group': '06b33269-d1c6-4fb9-a44b-be304982a550'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.362 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.362 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.362 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.362 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.363 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-26T17:10:31.362392) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:10:31 compute-0 openstack_network_exporter[204387]: ERROR   17:10:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 17:10:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:10:31 compute-0 openstack_network_exporter[204387]: ERROR   17:10:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 17:10:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.434 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.435 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.435 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.496 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.496 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.496 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.497 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.497 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f04ce8af080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.497 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.497 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.497 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.497 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.498 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.latency volume: 876270399 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.498 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.latency volume: 10769042 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.498 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-26T17:10:31.497878) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.498 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.498 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.latency volume: 1221465504 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.499 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.latency volume: 9811607 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.499 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.499 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.499 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f04ce8af0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.499 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.500 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.500 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.500 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.500 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.500 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.500 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.500 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.501 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.501 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.501 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-26T17:10:31.500167) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.501 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.502 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f04ce8af470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.502 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.502 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.502 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.502 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.503 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-26T17:10:31.502672) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.506 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.510 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.510 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.510 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f04ce8ac260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.510 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.510 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f04cfa81c70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.511 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.511 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.511 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.511 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.511 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-26T17:10:31.511317) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.531 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/cpu volume: 57560000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.556 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/cpu volume: 51960000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.557 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.557 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f04ce8ac170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.557 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.557 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.557 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.557 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.558 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.558 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.559 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.559 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f04ce8af140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.559 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.559 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.559 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.559 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.560 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-26T17:10:31.557770) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.560 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.560 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f04ce8adb50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.561 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.561 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.561 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-26T17:10:31.559748) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.561 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.561 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.561 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.562 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.562 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.562 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f04ce8ad9a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.562 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.563 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.563 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.563 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-26T17:10:31.561710) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.563 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.563 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.bytes volume: 2412 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.563 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-26T17:10:31.563507) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.564 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.outgoing.bytes volume: 2538 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.564 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.564 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f04ce8af1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.564 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.565 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.565 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.565 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.565 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.565 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f04ce8ad8e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.566 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.566 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.566 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.566 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.566 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.566 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.outgoing.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.567 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.567 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f04ce8ada60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.567 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.567 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.568 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.568 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-26T17:10:31.565200) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.568 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.568 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.568 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.569 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.569 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f04ce8adaf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.569 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.569 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f04ce8ad880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.569 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.569 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.569 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.570 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.570 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.570 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.570 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.571 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f04ce8af3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.571 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.571 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.571 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.571 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.571 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/memory.usage volume: 48.79296875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.572 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/memory.usage volume: 48.953125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.572 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-26T17:10:31.566385) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.572 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.572 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f04ce8af6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.573 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.573 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.573 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-26T17:10:31.568147) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.573 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.573 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-26T17:10:31.570050) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.573 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.573 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.574 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-26T17:10:31.571622) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.574 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-26T17:10:31.573526) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.574 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.574 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.574 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f04ce8af410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.575 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.575 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.575 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.575 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.575 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.bytes volume: 2304 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.575 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.incoming.bytes volume: 1696 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.576 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.576 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f04ce8ac470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.576 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.576 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-26T17:10:31.575301) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.576 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.576 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.577 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.577 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.577 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.577 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.578 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f04d17142f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.578 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.578 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-26T17:10:31.577052) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.578 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.578 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.578 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.579 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-26T17:10:31.578687) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.605 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.606 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.607 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.636 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.637 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.637 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.638 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.638 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f04ce8afe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.638 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.638 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.638 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.638 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.639 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-26T17:10:31.638687) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.639 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.639 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.640 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.640 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.640 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.640 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.641 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.641 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f04ce8aede0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.641 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.641 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.641 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.641 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.641 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.642 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.642 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-26T17:10:31.641792) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.642 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.643 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.643 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.643 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.644 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.644 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f04ce8aef00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.644 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.644 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.644 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.644 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.644 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.latency volume: 331117122 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.645 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.latency volume: 97972603 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.645 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-26T17:10:31.644734) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.645 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.latency volume: 58692222 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.645 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.latency volume: 437272566 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.646 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.latency volume: 86953754 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.646 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.latency volume: 62824695 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.646 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.647 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f04ce8ac710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.647 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.647 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.647 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.647 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.647 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.647 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.648 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.648 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-26T17:10:31.647515) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.648 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f04ce8aef60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.648 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.649 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.649 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.649 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.649 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.649 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.649 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.650 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.650 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.650 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.650 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-26T17:10:31.649194) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.651 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.651 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f04ce8aefc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.651 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.651 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.651 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.651 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.651 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.651 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.652 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.652 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.652 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-26T17:10:31.651606) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.653 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.653 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.653 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.654 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.654 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.654 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.654 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.654 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.655 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.655 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.655 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.655 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.655 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.655 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.655 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.656 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.656 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.656 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.656 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.656 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.656 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.656 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.656 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.656 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.656 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.656 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.656 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.657 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:10:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:10:31.657 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:10:34 compute-0 nova_compute[185389]: 2026-01-26 17:10:34.503 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:10:35 compute-0 nova_compute[185389]: 2026-01-26 17:10:35.466 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:10:39 compute-0 nova_compute[185389]: 2026-01-26 17:10:39.508 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:10:40 compute-0 nova_compute[185389]: 2026-01-26 17:10:40.467 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:10:44 compute-0 nova_compute[185389]: 2026-01-26 17:10:44.511 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:10:45 compute-0 nova_compute[185389]: 2026-01-26 17:10:45.469 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:10:46 compute-0 podman[250976]: 2026-01-26 17:10:46.184178216 +0000 UTC m=+0.064275276 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 26 17:10:46 compute-0 podman[250975]: 2026-01-26 17:10:46.202821632 +0000 UTC m=+0.084140355 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.build-date=20260120, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 26 17:10:46 compute-0 podman[250974]: 2026-01-26 17:10:46.207630713 +0000 UTC m=+0.093535640 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., vcs-type=git, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image 
that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, name=ubi9-minimal, distribution-scope=public, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=openstack_network_exporter, architecture=x86_64, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, container_name=openstack_network_exporter)
Jan 26 17:10:49 compute-0 podman[251036]: 2026-01-26 17:10:49.184729117 +0000 UTC m=+0.073407093 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 26 17:10:49 compute-0 nova_compute[185389]: 2026-01-26 17:10:49.517 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:10:50 compute-0 nova_compute[185389]: 2026-01-26 17:10:50.473 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:10:51 compute-0 podman[251058]: 2026-01-26 17:10:51.241123737 +0000 UTC m=+0.131457839 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 26 17:10:54 compute-0 podman[251079]: 2026-01-26 17:10:54.244776663 +0000 UTC m=+0.107923880 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=kepler, io.openshift.expose-services=, io.openshift.tags=base rhel9, name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, vcs-type=git, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, distribution-scope=public, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30)
Jan 26 17:10:54 compute-0 podman[251078]: 2026-01-26 17:10:54.249261145 +0000 UTC m=+0.112400932 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Jan 26 17:10:54 compute-0 podman[251077]: 2026-01-26 17:10:54.292306803 +0000 UTC m=+0.155448670 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, 
org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:10:54 compute-0 nova_compute[185389]: 2026-01-26 17:10:54.519 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:10:55 compute-0 nova_compute[185389]: 2026-01-26 17:10:55.475 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:10:56 compute-0 sshd-session[251138]: Accepted publickey for zuul from 38.102.83.145 port 51616 ssh2: RSA SHA256:CwDInbOSxpxqp3mWwtfmY0v0Zi73QXMq6svTI6Qp+40
Jan 26 17:10:56 compute-0 systemd-logind[788]: New session 30 of user zuul.
Jan 26 17:10:56 compute-0 systemd[1]: Started Session 30 of User zuul.
Jan 26 17:10:56 compute-0 sshd-session[251138]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 26 17:10:57 compute-0 sudo[251315]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucyblrodtehhebtnnrofgrhgypacjcxl ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769447456.7458072-60786-167085242892915/AnsiballZ_command.py'
Jan 26 17:10:57 compute-0 sudo[251315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 17:10:57 compute-0 python3[251317]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep ceilometer_agent_compute
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 17:10:57 compute-0 sudo[251315]: pam_unix(sudo:session): session closed for user root
Jan 26 17:10:59 compute-0 nova_compute[185389]: 2026-01-26 17:10:59.524 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:10:59 compute-0 podman[201244]: time="2026-01-26T17:10:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 17:10:59 compute-0 podman[201244]: @ - - [26/Jan/2026:17:10:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 17:10:59 compute-0 podman[201244]: @ - - [26/Jan/2026:17:10:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4381 "" "Go-http-client/1.1"
Jan 26 17:11:00 compute-0 nova_compute[185389]: 2026-01-26 17:11:00.480 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:11:01 compute-0 openstack_network_exporter[204387]: ERROR   17:11:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 17:11:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:11:01 compute-0 openstack_network_exporter[204387]: ERROR   17:11:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 17:11:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:11:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:11:01.761 106955 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:11:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:11:01.763 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:11:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:11:01.772 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.009s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:11:02 compute-0 nova_compute[185389]: 2026-01-26 17:11:02.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:11:02 compute-0 nova_compute[185389]: 2026-01-26 17:11:02.721 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 17:11:02 compute-0 nova_compute[185389]: 2026-01-26 17:11:02.721 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 17:11:03 compute-0 nova_compute[185389]: 2026-01-26 17:11:03.283 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "refresh_cache-60ba224f-9c5d-4eb4-b501-66d7339832b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 17:11:03 compute-0 nova_compute[185389]: 2026-01-26 17:11:03.284 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquired lock "refresh_cache-60ba224f-9c5d-4eb4-b501-66d7339832b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 17:11:03 compute-0 nova_compute[185389]: 2026-01-26 17:11:03.284 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 26 17:11:03 compute-0 nova_compute[185389]: 2026-01-26 17:11:03.285 185393 DEBUG nova.objects.instance [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 60ba224f-9c5d-4eb4-b501-66d7339832b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 17:11:04 compute-0 nova_compute[185389]: 2026-01-26 17:11:04.528 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:11:05 compute-0 nova_compute[185389]: 2026-01-26 17:11:05.482 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:11:06 compute-0 nova_compute[185389]: 2026-01-26 17:11:06.273 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Updating instance_info_cache with network_info: [{"id": "0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f", "address": "fa:16:3e:b0:51:31", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.57", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f88f3ae-fb", "ovs_interfaceid": "0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 17:11:06 compute-0 nova_compute[185389]: 2026-01-26 17:11:06.292 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Releasing lock "refresh_cache-60ba224f-9c5d-4eb4-b501-66d7339832b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 17:11:06 compute-0 nova_compute[185389]: 2026-01-26 17:11:06.293 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 26 17:11:06 compute-0 nova_compute[185389]: 2026-01-26 17:11:06.294 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:11:06 compute-0 nova_compute[185389]: 2026-01-26 17:11:06.295 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:11:06 compute-0 nova_compute[185389]: 2026-01-26 17:11:06.295 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:11:06 compute-0 nova_compute[185389]: 2026-01-26 17:11:06.296 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:11:06 compute-0 nova_compute[185389]: 2026-01-26 17:11:06.296 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 17:11:07 compute-0 nova_compute[185389]: 2026-01-26 17:11:07.721 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:11:09 compute-0 nova_compute[185389]: 2026-01-26 17:11:09.532 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:11:10 compute-0 nova_compute[185389]: 2026-01-26 17:11:10.486 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:11:10 compute-0 nova_compute[185389]: 2026-01-26 17:11:10.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:11:10 compute-0 nova_compute[185389]: 2026-01-26 17:11:10.761 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:11:10 compute-0 nova_compute[185389]: 2026-01-26 17:11:10.761 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:11:10 compute-0 nova_compute[185389]: 2026-01-26 17:11:10.762 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:11:10 compute-0 nova_compute[185389]: 2026-01-26 17:11:10.763 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 17:11:10 compute-0 nova_compute[185389]: 2026-01-26 17:11:10.981 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:11:11 compute-0 nova_compute[185389]: 2026-01-26 17:11:11.063 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:11:11 compute-0 nova_compute[185389]: 2026-01-26 17:11:11.065 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:11:11 compute-0 nova_compute[185389]: 2026-01-26 17:11:11.142 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:11:11 compute-0 nova_compute[185389]: 2026-01-26 17:11:11.143 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:11:11 compute-0 nova_compute[185389]: 2026-01-26 17:11:11.210 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:11:11 compute-0 nova_compute[185389]: 2026-01-26 17:11:11.212 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:11:11 compute-0 nova_compute[185389]: 2026-01-26 17:11:11.285 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:11:11 compute-0 nova_compute[185389]: 2026-01-26 17:11:11.295 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:11:11 compute-0 nova_compute[185389]: 2026-01-26 17:11:11.367 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:11:11 compute-0 nova_compute[185389]: 2026-01-26 17:11:11.369 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:11:11 compute-0 nova_compute[185389]: 2026-01-26 17:11:11.435 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:11:11 compute-0 nova_compute[185389]: 2026-01-26 17:11:11.438 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:11:11 compute-0 nova_compute[185389]: 2026-01-26 17:11:11.516 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:11:11 compute-0 nova_compute[185389]: 2026-01-26 17:11:11.518 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:11:11 compute-0 nova_compute[185389]: 2026-01-26 17:11:11.583 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:11:11 compute-0 nova_compute[185389]: 2026-01-26 17:11:11.997 185393 WARNING nova.virt.libvirt.driver [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:11:11 compute-0 nova_compute[185389]: 2026-01-26 17:11:11.999 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4833MB free_disk=72.39682006835938GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 17:11:12 compute-0 nova_compute[185389]: 2026-01-26 17:11:12.000 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:11:12 compute-0 nova_compute[185389]: 2026-01-26 17:11:12.000 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:11:12 compute-0 nova_compute[185389]: 2026-01-26 17:11:12.695 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance 60ba224f-9c5d-4eb4-b501-66d7339832b9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 17:11:12 compute-0 nova_compute[185389]: 2026-01-26 17:11:12.696 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance a2578f61-3f19-40f4-a32f-97cf22569550 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 17:11:12 compute-0 nova_compute[185389]: 2026-01-26 17:11:12.697 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 17:11:12 compute-0 nova_compute[185389]: 2026-01-26 17:11:12.697 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 17:11:12 compute-0 nova_compute[185389]: 2026-01-26 17:11:12.847 185393 DEBUG nova.compute.provider_tree [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:11:12 compute-0 nova_compute[185389]: 2026-01-26 17:11:12.866 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:11:12 compute-0 nova_compute[185389]: 2026-01-26 17:11:12.868 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 17:11:12 compute-0 nova_compute[185389]: 2026-01-26 17:11:12.868 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.868s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:11:14 compute-0 nova_compute[185389]: 2026-01-26 17:11:14.534 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:11:15 compute-0 nova_compute[185389]: 2026-01-26 17:11:15.488 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:11:16 compute-0 nova_compute[185389]: 2026-01-26 17:11:16.863 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:11:17 compute-0 podman[251381]: 2026-01-26 17:11:17.227235168 +0000 UTC m=+0.100254102 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, config_id=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260120)
Jan 26 17:11:17 compute-0 podman[251380]: 2026-01-26 17:11:17.22697376 +0000 UTC m=+0.105809372 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, vendor=Red Hat, Inc., config_id=openstack_network_exporter, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, io.buildah.version=1.33.7, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, vcs-type=git, version=9.6, container_name=openstack_network_exporter)
Jan 26 17:11:17 compute-0 podman[251382]: 2026-01-26 17:11:17.236749816 +0000 UTC m=+0.096560341 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 17:11:18 compute-0 nova_compute[185389]: 2026-01-26 17:11:18.578 185393 DEBUG oslo_concurrency.lockutils [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Acquiring lock "8a322d6b-3a53-4389-8cee-ffbe9b632b0f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:11:18 compute-0 nova_compute[185389]: 2026-01-26 17:11:18.579 185393 DEBUG oslo_concurrency.lockutils [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "8a322d6b-3a53-4389-8cee-ffbe9b632b0f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:11:18 compute-0 nova_compute[185389]: 2026-01-26 17:11:18.616 185393 DEBUG nova.compute.manager [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 8a322d6b-3a53-4389-8cee-ffbe9b632b0f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 17:11:18 compute-0 nova_compute[185389]: 2026-01-26 17:11:18.715 185393 DEBUG oslo_concurrency.lockutils [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:11:18 compute-0 nova_compute[185389]: 2026-01-26 17:11:18.716 185393 DEBUG oslo_concurrency.lockutils [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:11:18 compute-0 nova_compute[185389]: 2026-01-26 17:11:18.729 185393 DEBUG nova.virt.hardware [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 17:11:18 compute-0 nova_compute[185389]: 2026-01-26 17:11:18.729 185393 INFO nova.compute.claims [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 8a322d6b-3a53-4389-8cee-ffbe9b632b0f] Claim successful on node compute-0.ctlplane.example.com
Jan 26 17:11:18 compute-0 nova_compute[185389]: 2026-01-26 17:11:18.908 185393 DEBUG nova.compute.provider_tree [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:11:18 compute-0 nova_compute[185389]: 2026-01-26 17:11:18.927 185393 DEBUG nova.scheduler.client.report [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:11:18 compute-0 nova_compute[185389]: 2026-01-26 17:11:18.956 185393 DEBUG oslo_concurrency.lockutils [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.240s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:11:18 compute-0 nova_compute[185389]: 2026-01-26 17:11:18.956 185393 DEBUG nova.compute.manager [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 8a322d6b-3a53-4389-8cee-ffbe9b632b0f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 17:11:19 compute-0 nova_compute[185389]: 2026-01-26 17:11:19.113 185393 DEBUG nova.compute.manager [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 8a322d6b-3a53-4389-8cee-ffbe9b632b0f] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Jan 26 17:11:19 compute-0 nova_compute[185389]: 2026-01-26 17:11:19.137 185393 INFO nova.virt.libvirt.driver [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 8a322d6b-3a53-4389-8cee-ffbe9b632b0f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 17:11:19 compute-0 nova_compute[185389]: 2026-01-26 17:11:19.185 185393 DEBUG nova.compute.manager [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 8a322d6b-3a53-4389-8cee-ffbe9b632b0f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 17:11:19 compute-0 nova_compute[185389]: 2026-01-26 17:11:19.519 185393 DEBUG nova.compute.manager [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 8a322d6b-3a53-4389-8cee-ffbe9b632b0f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 17:11:19 compute-0 nova_compute[185389]: 2026-01-26 17:11:19.521 185393 DEBUG nova.virt.libvirt.driver [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 8a322d6b-3a53-4389-8cee-ffbe9b632b0f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 17:11:19 compute-0 nova_compute[185389]: 2026-01-26 17:11:19.522 185393 INFO nova.virt.libvirt.driver [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 8a322d6b-3a53-4389-8cee-ffbe9b632b0f] Creating image(s)
Jan 26 17:11:19 compute-0 nova_compute[185389]: 2026-01-26 17:11:19.523 185393 DEBUG oslo_concurrency.lockutils [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Acquiring lock "/var/lib/nova/instances/8a322d6b-3a53-4389-8cee-ffbe9b632b0f/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:11:19 compute-0 nova_compute[185389]: 2026-01-26 17:11:19.524 185393 DEBUG oslo_concurrency.lockutils [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "/var/lib/nova/instances/8a322d6b-3a53-4389-8cee-ffbe9b632b0f/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:11:19 compute-0 nova_compute[185389]: 2026-01-26 17:11:19.525 185393 DEBUG oslo_concurrency.lockutils [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "/var/lib/nova/instances/8a322d6b-3a53-4389-8cee-ffbe9b632b0f/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:11:19 compute-0 nova_compute[185389]: 2026-01-26 17:11:19.526 185393 DEBUG oslo_concurrency.lockutils [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Acquiring lock "4400b6a020bcbf8c391a49e2af8d38405e8bb73f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:11:19 compute-0 nova_compute[185389]: 2026-01-26 17:11:19.527 185393 DEBUG oslo_concurrency.lockutils [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "4400b6a020bcbf8c391a49e2af8d38405e8bb73f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:11:19 compute-0 nova_compute[185389]: 2026-01-26 17:11:19.539 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:11:19 compute-0 nova_compute[185389]: 2026-01-26 17:11:19.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:11:20 compute-0 podman[251439]: 2026-01-26 17:11:20.211683213 +0000 UTC m=+0.093829388 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 26 17:11:20 compute-0 nova_compute[185389]: 2026-01-26 17:11:20.492 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:11:22 compute-0 nova_compute[185389]: 2026-01-26 17:11:22.154 185393 DEBUG oslo_concurrency.processutils [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/4400b6a020bcbf8c391a49e2af8d38405e8bb73f.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:11:22 compute-0 podman[251463]: 2026-01-26 17:11:22.194180316 +0000 UTC m=+0.086433667 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 26 17:11:22 compute-0 nova_compute[185389]: 2026-01-26 17:11:22.232 185393 DEBUG oslo_concurrency.processutils [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/4400b6a020bcbf8c391a49e2af8d38405e8bb73f.part --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:11:22 compute-0 nova_compute[185389]: 2026-01-26 17:11:22.234 185393 DEBUG nova.virt.images [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] c1c5c49b-a1bf-41a4-8c52-f6be03e2523c was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Jan 26 17:11:22 compute-0 nova_compute[185389]: 2026-01-26 17:11:22.236 185393 DEBUG nova.privsep.utils [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Jan 26 17:11:22 compute-0 nova_compute[185389]: 2026-01-26 17:11:22.236 185393 DEBUG oslo_concurrency.processutils [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/4400b6a020bcbf8c391a49e2af8d38405e8bb73f.part /var/lib/nova/instances/_base/4400b6a020bcbf8c391a49e2af8d38405e8bb73f.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:11:22 compute-0 nova_compute[185389]: 2026-01-26 17:11:22.415 185393 DEBUG oslo_concurrency.processutils [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/4400b6a020bcbf8c391a49e2af8d38405e8bb73f.part /var/lib/nova/instances/_base/4400b6a020bcbf8c391a49e2af8d38405e8bb73f.converted" returned: 0 in 0.178s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:11:22 compute-0 nova_compute[185389]: 2026-01-26 17:11:22.420 185393 DEBUG oslo_concurrency.processutils [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/4400b6a020bcbf8c391a49e2af8d38405e8bb73f.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:11:22 compute-0 nova_compute[185389]: 2026-01-26 17:11:22.480 185393 DEBUG oslo_concurrency.processutils [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/4400b6a020bcbf8c391a49e2af8d38405e8bb73f.converted --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:11:22 compute-0 nova_compute[185389]: 2026-01-26 17:11:22.482 185393 DEBUG oslo_concurrency.lockutils [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "4400b6a020bcbf8c391a49e2af8d38405e8bb73f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.956s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:11:22 compute-0 nova_compute[185389]: 2026-01-26 17:11:22.514 185393 DEBUG oslo_concurrency.processutils [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/4400b6a020bcbf8c391a49e2af8d38405e8bb73f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:11:22 compute-0 nova_compute[185389]: 2026-01-26 17:11:22.591 185393 DEBUG oslo_concurrency.processutils [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/4400b6a020bcbf8c391a49e2af8d38405e8bb73f --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:11:22 compute-0 nova_compute[185389]: 2026-01-26 17:11:22.593 185393 DEBUG oslo_concurrency.lockutils [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Acquiring lock "4400b6a020bcbf8c391a49e2af8d38405e8bb73f" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:11:22 compute-0 nova_compute[185389]: 2026-01-26 17:11:22.595 185393 DEBUG oslo_concurrency.lockutils [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "4400b6a020bcbf8c391a49e2af8d38405e8bb73f" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:11:22 compute-0 nova_compute[185389]: 2026-01-26 17:11:22.620 185393 DEBUG oslo_concurrency.processutils [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/4400b6a020bcbf8c391a49e2af8d38405e8bb73f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:11:22 compute-0 nova_compute[185389]: 2026-01-26 17:11:22.705 185393 DEBUG oslo_concurrency.processutils [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/4400b6a020bcbf8c391a49e2af8d38405e8bb73f --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:11:22 compute-0 nova_compute[185389]: 2026-01-26 17:11:22.707 185393 DEBUG oslo_concurrency.processutils [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/4400b6a020bcbf8c391a49e2af8d38405e8bb73f,backing_fmt=raw /var/lib/nova/instances/8a322d6b-3a53-4389-8cee-ffbe9b632b0f/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:11:22 compute-0 nova_compute[185389]: 2026-01-26 17:11:22.754 185393 DEBUG oslo_concurrency.processutils [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/4400b6a020bcbf8c391a49e2af8d38405e8bb73f,backing_fmt=raw /var/lib/nova/instances/8a322d6b-3a53-4389-8cee-ffbe9b632b0f/disk 1073741824" returned: 0 in 0.047s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:11:22 compute-0 nova_compute[185389]: 2026-01-26 17:11:22.756 185393 DEBUG oslo_concurrency.lockutils [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "4400b6a020bcbf8c391a49e2af8d38405e8bb73f" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.161s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:11:22 compute-0 nova_compute[185389]: 2026-01-26 17:11:22.757 185393 DEBUG oslo_concurrency.processutils [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/4400b6a020bcbf8c391a49e2af8d38405e8bb73f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:11:22 compute-0 nova_compute[185389]: 2026-01-26 17:11:22.844 185393 DEBUG oslo_concurrency.processutils [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/4400b6a020bcbf8c391a49e2af8d38405e8bb73f --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:11:22 compute-0 nova_compute[185389]: 2026-01-26 17:11:22.846 185393 DEBUG nova.virt.disk.api [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Checking if we can resize image /var/lib/nova/instances/8a322d6b-3a53-4389-8cee-ffbe9b632b0f/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 26 17:11:22 compute-0 nova_compute[185389]: 2026-01-26 17:11:22.847 185393 DEBUG oslo_concurrency.processutils [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8a322d6b-3a53-4389-8cee-ffbe9b632b0f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:11:22 compute-0 nova_compute[185389]: 2026-01-26 17:11:22.920 185393 DEBUG oslo_concurrency.processutils [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8a322d6b-3a53-4389-8cee-ffbe9b632b0f/disk --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:11:22 compute-0 nova_compute[185389]: 2026-01-26 17:11:22.922 185393 DEBUG nova.virt.disk.api [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Cannot resize image /var/lib/nova/instances/8a322d6b-3a53-4389-8cee-ffbe9b632b0f/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 26 17:11:22 compute-0 nova_compute[185389]: 2026-01-26 17:11:22.923 185393 DEBUG nova.objects.instance [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lazy-loading 'migration_context' on Instance uuid 8a322d6b-3a53-4389-8cee-ffbe9b632b0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 17:11:23 compute-0 nova_compute[185389]: 2026-01-26 17:11:23.033 185393 DEBUG oslo_concurrency.lockutils [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Acquiring lock "/var/lib/nova/instances/8a322d6b-3a53-4389-8cee-ffbe9b632b0f/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:11:23 compute-0 nova_compute[185389]: 2026-01-26 17:11:23.034 185393 DEBUG oslo_concurrency.lockutils [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "/var/lib/nova/instances/8a322d6b-3a53-4389-8cee-ffbe9b632b0f/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:11:23 compute-0 nova_compute[185389]: 2026-01-26 17:11:23.034 185393 DEBUG oslo_concurrency.lockutils [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "/var/lib/nova/instances/8a322d6b-3a53-4389-8cee-ffbe9b632b0f/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:11:23 compute-0 nova_compute[185389]: 2026-01-26 17:11:23.054 185393 DEBUG oslo_concurrency.processutils [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:11:23 compute-0 nova_compute[185389]: 2026-01-26 17:11:23.134 185393 DEBUG oslo_concurrency.processutils [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:11:23 compute-0 nova_compute[185389]: 2026-01-26 17:11:23.136 185393 DEBUG oslo_concurrency.lockutils [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:11:23 compute-0 nova_compute[185389]: 2026-01-26 17:11:23.136 185393 DEBUG oslo_concurrency.lockutils [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:11:23 compute-0 nova_compute[185389]: 2026-01-26 17:11:23.152 185393 DEBUG oslo_concurrency.processutils [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:11:23 compute-0 nova_compute[185389]: 2026-01-26 17:11:23.230 185393 DEBUG oslo_concurrency.processutils [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:11:23 compute-0 nova_compute[185389]: 2026-01-26 17:11:23.232 185393 DEBUG oslo_concurrency.processutils [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/8a322d6b-3a53-4389-8cee-ffbe9b632b0f/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:11:23 compute-0 nova_compute[185389]: 2026-01-26 17:11:23.294 185393 DEBUG oslo_concurrency.processutils [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/8a322d6b-3a53-4389-8cee-ffbe9b632b0f/disk.eph0 1073741824" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:11:23 compute-0 nova_compute[185389]: 2026-01-26 17:11:23.295 185393 DEBUG oslo_concurrency.lockutils [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.159s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:11:23 compute-0 nova_compute[185389]: 2026-01-26 17:11:23.297 185393 DEBUG oslo_concurrency.processutils [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:11:23 compute-0 nova_compute[185389]: 2026-01-26 17:11:23.363 185393 DEBUG oslo_concurrency.processutils [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:11:23 compute-0 nova_compute[185389]: 2026-01-26 17:11:23.365 185393 DEBUG nova.virt.libvirt.driver [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 8a322d6b-3a53-4389-8cee-ffbe9b632b0f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 17:11:23 compute-0 nova_compute[185389]: 2026-01-26 17:11:23.365 185393 DEBUG nova.virt.libvirt.driver [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 8a322d6b-3a53-4389-8cee-ffbe9b632b0f] Ensure instance console log exists: /var/lib/nova/instances/8a322d6b-3a53-4389-8cee-ffbe9b632b0f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 17:11:23 compute-0 nova_compute[185389]: 2026-01-26 17:11:23.366 185393 DEBUG oslo_concurrency.lockutils [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:11:23 compute-0 nova_compute[185389]: 2026-01-26 17:11:23.366 185393 DEBUG oslo_concurrency.lockutils [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:11:23 compute-0 nova_compute[185389]: 2026-01-26 17:11:23.366 185393 DEBUG oslo_concurrency.lockutils [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:11:23 compute-0 nova_compute[185389]: 2026-01-26 17:11:23.368 185393 DEBUG nova.virt.libvirt.driver [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 8a322d6b-3a53-4389-8cee-ffbe9b632b0f] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-01-26T17:11:02Z,direct_url=<?>,disk_format='qcow2',id=c1c5c49b-a1bf-41a4-8c52-f6be03e2523c,min_disk=0,min_ram=0,name='fvt_testing_image',owner='aa8f1f3bbce34237a208c8e92ca9286f',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-01-26T17:11:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'boot_index': 0, 'encryption_format': None, 'encrypted': False, 'image_id': 'c1c5c49b-a1bf-41a4-8c52-f6be03e2523c'}], 'ephemerals': [{'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'device_name': '/dev/vdb', 'disk_bus': 'virtio', 'size': 1, 'encryption_secret_uuid': None, 'encryption_format': None, 'encrypted': False}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 17:11:23 compute-0 nova_compute[185389]: 2026-01-26 17:11:23.376 185393 WARNING nova.virt.libvirt.driver [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:11:23 compute-0 nova_compute[185389]: 2026-01-26 17:11:23.391 185393 DEBUG nova.virt.libvirt.host [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 17:11:23 compute-0 nova_compute[185389]: 2026-01-26 17:11:23.392 185393 DEBUG nova.virt.libvirt.host [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 17:11:23 compute-0 nova_compute[185389]: 2026-01-26 17:11:23.397 185393 DEBUG nova.virt.libvirt.host [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 17:11:23 compute-0 nova_compute[185389]: 2026-01-26 17:11:23.398 185393 DEBUG nova.virt.libvirt.host [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 17:11:23 compute-0 nova_compute[185389]: 2026-01-26 17:11:23.399 185393 DEBUG nova.virt.libvirt.driver [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 17:11:23 compute-0 nova_compute[185389]: 2026-01-26 17:11:23.399 185393 DEBUG nova.virt.hardware [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T17:11:11Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='5e2cd87b-f005-44f3-a5b4-7e17020a4018',id=2,is_public=True,memory_mb=512,name='fvt_testing_flavor',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-01-26T17:11:02Z,direct_url=<?>,disk_format='qcow2',id=c1c5c49b-a1bf-41a4-8c52-f6be03e2523c,min_disk=0,min_ram=0,name='fvt_testing_image',owner='aa8f1f3bbce34237a208c8e92ca9286f',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-01-26T17:11:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 17:11:23 compute-0 nova_compute[185389]: 2026-01-26 17:11:23.400 185393 DEBUG nova.virt.hardware [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 17:11:23 compute-0 nova_compute[185389]: 2026-01-26 17:11:23.400 185393 DEBUG nova.virt.hardware [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 17:11:23 compute-0 nova_compute[185389]: 2026-01-26 17:11:23.400 185393 DEBUG nova.virt.hardware [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 17:11:23 compute-0 nova_compute[185389]: 2026-01-26 17:11:23.401 185393 DEBUG nova.virt.hardware [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 17:11:23 compute-0 nova_compute[185389]: 2026-01-26 17:11:23.401 185393 DEBUG nova.virt.hardware [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 17:11:23 compute-0 nova_compute[185389]: 2026-01-26 17:11:23.401 185393 DEBUG nova.virt.hardware [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 17:11:23 compute-0 nova_compute[185389]: 2026-01-26 17:11:23.402 185393 DEBUG nova.virt.hardware [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 17:11:23 compute-0 nova_compute[185389]: 2026-01-26 17:11:23.402 185393 DEBUG nova.virt.hardware [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 17:11:23 compute-0 nova_compute[185389]: 2026-01-26 17:11:23.402 185393 DEBUG nova.virt.hardware [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 17:11:23 compute-0 nova_compute[185389]: 2026-01-26 17:11:23.403 185393 DEBUG nova.virt.hardware [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 17:11:23 compute-0 nova_compute[185389]: 2026-01-26 17:11:23.408 185393 DEBUG nova.objects.instance [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lazy-loading 'pci_devices' on Instance uuid 8a322d6b-3a53-4389-8cee-ffbe9b632b0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 17:11:23 compute-0 nova_compute[185389]: 2026-01-26 17:11:23.430 185393 DEBUG nova.virt.libvirt.driver [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 8a322d6b-3a53-4389-8cee-ffbe9b632b0f] End _get_guest_xml xml=<domain type="kvm">
Jan 26 17:11:23 compute-0 nova_compute[185389]:   <uuid>8a322d6b-3a53-4389-8cee-ffbe9b632b0f</uuid>
Jan 26 17:11:23 compute-0 nova_compute[185389]:   <name>instance-00000006</name>
Jan 26 17:11:23 compute-0 nova_compute[185389]:   <memory>524288</memory>
Jan 26 17:11:23 compute-0 nova_compute[185389]:   <vcpu>1</vcpu>
Jan 26 17:11:23 compute-0 nova_compute[185389]:   <metadata>
Jan 26 17:11:23 compute-0 nova_compute[185389]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 17:11:23 compute-0 nova_compute[185389]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 17:11:23 compute-0 nova_compute[185389]:       <nova:name>fvt_testing_server</nova:name>
Jan 26 17:11:23 compute-0 nova_compute[185389]:       <nova:creationTime>2026-01-26 17:11:23</nova:creationTime>
Jan 26 17:11:23 compute-0 nova_compute[185389]:       <nova:flavor name="fvt_testing_flavor">
Jan 26 17:11:23 compute-0 nova_compute[185389]:         <nova:memory>512</nova:memory>
Jan 26 17:11:23 compute-0 nova_compute[185389]:         <nova:disk>1</nova:disk>
Jan 26 17:11:23 compute-0 nova_compute[185389]:         <nova:swap>0</nova:swap>
Jan 26 17:11:23 compute-0 nova_compute[185389]:         <nova:ephemeral>1</nova:ephemeral>
Jan 26 17:11:23 compute-0 nova_compute[185389]:         <nova:vcpus>1</nova:vcpus>
Jan 26 17:11:23 compute-0 nova_compute[185389]:       </nova:flavor>
Jan 26 17:11:23 compute-0 nova_compute[185389]:       <nova:owner>
Jan 26 17:11:23 compute-0 nova_compute[185389]:         <nova:user uuid="3c0ab9326d69400aa6a4a91432885d7f">admin</nova:user>
Jan 26 17:11:23 compute-0 nova_compute[185389]:         <nova:project uuid="aa8f1f3bbce34237a208c8e92ca9286f">admin</nova:project>
Jan 26 17:11:23 compute-0 nova_compute[185389]:       </nova:owner>
Jan 26 17:11:23 compute-0 nova_compute[185389]:       <nova:root type="image" uuid="c1c5c49b-a1bf-41a4-8c52-f6be03e2523c"/>
Jan 26 17:11:23 compute-0 nova_compute[185389]:       <nova:ports/>
Jan 26 17:11:23 compute-0 nova_compute[185389]:     </nova:instance>
Jan 26 17:11:23 compute-0 nova_compute[185389]:   </metadata>
Jan 26 17:11:23 compute-0 nova_compute[185389]:   <sysinfo type="smbios">
Jan 26 17:11:23 compute-0 nova_compute[185389]:     <system>
Jan 26 17:11:23 compute-0 nova_compute[185389]:       <entry name="manufacturer">RDO</entry>
Jan 26 17:11:23 compute-0 nova_compute[185389]:       <entry name="product">OpenStack Compute</entry>
Jan 26 17:11:23 compute-0 nova_compute[185389]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 17:11:23 compute-0 nova_compute[185389]:       <entry name="serial">8a322d6b-3a53-4389-8cee-ffbe9b632b0f</entry>
Jan 26 17:11:23 compute-0 nova_compute[185389]:       <entry name="uuid">8a322d6b-3a53-4389-8cee-ffbe9b632b0f</entry>
Jan 26 17:11:23 compute-0 nova_compute[185389]:       <entry name="family">Virtual Machine</entry>
Jan 26 17:11:23 compute-0 nova_compute[185389]:     </system>
Jan 26 17:11:23 compute-0 nova_compute[185389]:   </sysinfo>
Jan 26 17:11:23 compute-0 nova_compute[185389]:   <os>
Jan 26 17:11:23 compute-0 nova_compute[185389]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 17:11:23 compute-0 nova_compute[185389]:     <boot dev="hd"/>
Jan 26 17:11:23 compute-0 nova_compute[185389]:     <smbios mode="sysinfo"/>
Jan 26 17:11:23 compute-0 nova_compute[185389]:   </os>
Jan 26 17:11:23 compute-0 nova_compute[185389]:   <features>
Jan 26 17:11:23 compute-0 nova_compute[185389]:     <acpi/>
Jan 26 17:11:23 compute-0 nova_compute[185389]:     <apic/>
Jan 26 17:11:23 compute-0 nova_compute[185389]:     <vmcoreinfo/>
Jan 26 17:11:23 compute-0 nova_compute[185389]:   </features>
Jan 26 17:11:23 compute-0 nova_compute[185389]:   <clock offset="utc">
Jan 26 17:11:23 compute-0 nova_compute[185389]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 17:11:23 compute-0 nova_compute[185389]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 17:11:23 compute-0 nova_compute[185389]:     <timer name="hpet" present="no"/>
Jan 26 17:11:23 compute-0 nova_compute[185389]:   </clock>
Jan 26 17:11:23 compute-0 nova_compute[185389]:   <cpu mode="host-model" match="exact">
Jan 26 17:11:23 compute-0 nova_compute[185389]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 17:11:23 compute-0 nova_compute[185389]:   </cpu>
Jan 26 17:11:23 compute-0 nova_compute[185389]:   <devices>
Jan 26 17:11:23 compute-0 nova_compute[185389]:     <disk type="file" device="disk">
Jan 26 17:11:23 compute-0 nova_compute[185389]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 26 17:11:23 compute-0 nova_compute[185389]:       <source file="/var/lib/nova/instances/8a322d6b-3a53-4389-8cee-ffbe9b632b0f/disk"/>
Jan 26 17:11:23 compute-0 nova_compute[185389]:       <target dev="vda" bus="virtio"/>
Jan 26 17:11:23 compute-0 nova_compute[185389]:     </disk>
Jan 26 17:11:23 compute-0 nova_compute[185389]:     <disk type="file" device="disk">
Jan 26 17:11:23 compute-0 nova_compute[185389]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 26 17:11:23 compute-0 nova_compute[185389]:       <source file="/var/lib/nova/instances/8a322d6b-3a53-4389-8cee-ffbe9b632b0f/disk.eph0"/>
Jan 26 17:11:23 compute-0 nova_compute[185389]:       <target dev="vdb" bus="virtio"/>
Jan 26 17:11:23 compute-0 nova_compute[185389]:     </disk>
Jan 26 17:11:23 compute-0 nova_compute[185389]:     <disk type="file" device="cdrom">
Jan 26 17:11:23 compute-0 nova_compute[185389]:       <driver name="qemu" type="raw" cache="none"/>
Jan 26 17:11:23 compute-0 nova_compute[185389]:       <source file="/var/lib/nova/instances/8a322d6b-3a53-4389-8cee-ffbe9b632b0f/disk.config"/>
Jan 26 17:11:23 compute-0 nova_compute[185389]:       <target dev="sda" bus="sata"/>
Jan 26 17:11:23 compute-0 nova_compute[185389]:     </disk>
Jan 26 17:11:23 compute-0 nova_compute[185389]:     <serial type="pty">
Jan 26 17:11:23 compute-0 nova_compute[185389]:       <log file="/var/lib/nova/instances/8a322d6b-3a53-4389-8cee-ffbe9b632b0f/console.log" append="off"/>
Jan 26 17:11:23 compute-0 nova_compute[185389]:     </serial>
Jan 26 17:11:23 compute-0 nova_compute[185389]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 17:11:23 compute-0 nova_compute[185389]:     <video>
Jan 26 17:11:23 compute-0 nova_compute[185389]:       <model type="virtio"/>
Jan 26 17:11:23 compute-0 nova_compute[185389]:     </video>
Jan 26 17:11:23 compute-0 nova_compute[185389]:     <input type="tablet" bus="usb"/>
Jan 26 17:11:23 compute-0 nova_compute[185389]:     <rng model="virtio">
Jan 26 17:11:23 compute-0 nova_compute[185389]:       <backend model="random">/dev/urandom</backend>
Jan 26 17:11:23 compute-0 nova_compute[185389]:     </rng>
Jan 26 17:11:23 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root"/>
Jan 26 17:11:23 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:11:23 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:11:23 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:11:23 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:11:23 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:11:23 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:11:23 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:11:23 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:11:23 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:11:23 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:11:23 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:11:23 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:11:23 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:11:23 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:11:23 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:11:23 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:11:23 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:11:23 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:11:23 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:11:23 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:11:23 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:11:23 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:11:23 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:11:23 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:11:23 compute-0 nova_compute[185389]:     <controller type="usb" index="0"/>
Jan 26 17:11:23 compute-0 nova_compute[185389]:     <memballoon model="virtio">
Jan 26 17:11:23 compute-0 nova_compute[185389]:       <stats period="10"/>
Jan 26 17:11:23 compute-0 nova_compute[185389]:     </memballoon>
Jan 26 17:11:23 compute-0 nova_compute[185389]:   </devices>
Jan 26 17:11:23 compute-0 nova_compute[185389]: </domain>
Jan 26 17:11:23 compute-0 nova_compute[185389]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 17:11:23 compute-0 nova_compute[185389]: 2026-01-26 17:11:23.495 185393 DEBUG nova.virt.libvirt.driver [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 17:11:23 compute-0 nova_compute[185389]: 2026-01-26 17:11:23.496 185393 DEBUG nova.virt.libvirt.driver [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 17:11:23 compute-0 nova_compute[185389]: 2026-01-26 17:11:23.497 185393 DEBUG nova.virt.libvirt.driver [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 17:11:23 compute-0 nova_compute[185389]: 2026-01-26 17:11:23.498 185393 INFO nova.virt.libvirt.driver [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 8a322d6b-3a53-4389-8cee-ffbe9b632b0f] Using config drive
Jan 26 17:11:23 compute-0 nova_compute[185389]: 2026-01-26 17:11:23.696 185393 INFO nova.virt.libvirt.driver [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 8a322d6b-3a53-4389-8cee-ffbe9b632b0f] Creating config drive at /var/lib/nova/instances/8a322d6b-3a53-4389-8cee-ffbe9b632b0f/disk.config
Jan 26 17:11:23 compute-0 nova_compute[185389]: 2026-01-26 17:11:23.708 185393 DEBUG oslo_concurrency.processutils [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8a322d6b-3a53-4389-8cee-ffbe9b632b0f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphmyb960i execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:11:23 compute-0 nova_compute[185389]: 2026-01-26 17:11:23.843 185393 DEBUG oslo_concurrency.processutils [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8a322d6b-3a53-4389-8cee-ffbe9b632b0f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphmyb960i" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:11:23 compute-0 systemd-machined[156679]: New machine qemu-6-instance-00000006.
Jan 26 17:11:23 compute-0 systemd[1]: Started Virtual Machine qemu-6-instance-00000006.
Jan 26 17:11:24 compute-0 nova_compute[185389]: 2026-01-26 17:11:24.516 185393 DEBUG nova.virt.driver [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Emitting event <LifecycleEvent: 1769447484.515783, 8a322d6b-3a53-4389-8cee-ffbe9b632b0f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 17:11:24 compute-0 podman[251549]: 2026-01-26 17:11:24.520819858 +0000 UTC m=+0.097516428 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, distribution-scope=public, managed_by=edpm_ansible, maintainer=Red Hat, Inc., name=ubi9, release-0.7.12=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, vcs-type=git, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.openshift.expose-services=, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, version=9.4)
Jan 26 17:11:24 compute-0 nova_compute[185389]: 2026-01-26 17:11:24.520 185393 INFO nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: 8a322d6b-3a53-4389-8cee-ffbe9b632b0f] VM Resumed (Lifecycle Event)
Jan 26 17:11:24 compute-0 nova_compute[185389]: 2026-01-26 17:11:24.523 185393 DEBUG nova.compute.manager [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 8a322d6b-3a53-4389-8cee-ffbe9b632b0f] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 17:11:24 compute-0 nova_compute[185389]: 2026-01-26 17:11:24.526 185393 DEBUG nova.virt.libvirt.driver [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 8a322d6b-3a53-4389-8cee-ffbe9b632b0f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 17:11:24 compute-0 nova_compute[185389]: 2026-01-26 17:11:24.536 185393 INFO nova.virt.libvirt.driver [-] [instance: 8a322d6b-3a53-4389-8cee-ffbe9b632b0f] Instance spawned successfully.
Jan 26 17:11:24 compute-0 nova_compute[185389]: 2026-01-26 17:11:24.536 185393 DEBUG nova.virt.libvirt.driver [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 8a322d6b-3a53-4389-8cee-ffbe9b632b0f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 17:11:24 compute-0 nova_compute[185389]: 2026-01-26 17:11:24.540 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:11:24 compute-0 podman[251548]: 2026-01-26 17:11:24.547113882 +0000 UTC m=+0.123864703 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 26 17:11:24 compute-0 podman[251546]: 2026-01-26 17:11:24.552691053 +0000 UTC m=+0.137175743 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 
Base Image, org.label-schema.schema-version=1.0)
Jan 26 17:11:24 compute-0 nova_compute[185389]: 2026-01-26 17:11:24.554 185393 DEBUG nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: 8a322d6b-3a53-4389-8cee-ffbe9b632b0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 17:11:24 compute-0 nova_compute[185389]: 2026-01-26 17:11:24.564 185393 DEBUG nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: 8a322d6b-3a53-4389-8cee-ffbe9b632b0f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 17:11:24 compute-0 nova_compute[185389]: 2026-01-26 17:11:24.597 185393 INFO nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: 8a322d6b-3a53-4389-8cee-ffbe9b632b0f] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 17:11:24 compute-0 nova_compute[185389]: 2026-01-26 17:11:24.598 185393 DEBUG nova.virt.driver [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Emitting event <LifecycleEvent: 1769447484.5260932, 8a322d6b-3a53-4389-8cee-ffbe9b632b0f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 17:11:24 compute-0 nova_compute[185389]: 2026-01-26 17:11:24.600 185393 INFO nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: 8a322d6b-3a53-4389-8cee-ffbe9b632b0f] VM Started (Lifecycle Event)
Jan 26 17:11:24 compute-0 nova_compute[185389]: 2026-01-26 17:11:24.608 185393 DEBUG nova.virt.libvirt.driver [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 8a322d6b-3a53-4389-8cee-ffbe9b632b0f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 17:11:24 compute-0 nova_compute[185389]: 2026-01-26 17:11:24.608 185393 DEBUG nova.virt.libvirt.driver [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 8a322d6b-3a53-4389-8cee-ffbe9b632b0f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 17:11:24 compute-0 nova_compute[185389]: 2026-01-26 17:11:24.609 185393 DEBUG nova.virt.libvirt.driver [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 8a322d6b-3a53-4389-8cee-ffbe9b632b0f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 17:11:24 compute-0 nova_compute[185389]: 2026-01-26 17:11:24.610 185393 DEBUG nova.virt.libvirt.driver [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 8a322d6b-3a53-4389-8cee-ffbe9b632b0f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 17:11:24 compute-0 nova_compute[185389]: 2026-01-26 17:11:24.610 185393 DEBUG nova.virt.libvirt.driver [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 8a322d6b-3a53-4389-8cee-ffbe9b632b0f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 17:11:24 compute-0 nova_compute[185389]: 2026-01-26 17:11:24.611 185393 DEBUG nova.virt.libvirt.driver [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 8a322d6b-3a53-4389-8cee-ffbe9b632b0f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 17:11:24 compute-0 nova_compute[185389]: 2026-01-26 17:11:24.673 185393 DEBUG nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: 8a322d6b-3a53-4389-8cee-ffbe9b632b0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 17:11:24 compute-0 nova_compute[185389]: 2026-01-26 17:11:24.680 185393 DEBUG nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: 8a322d6b-3a53-4389-8cee-ffbe9b632b0f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 17:11:24 compute-0 nova_compute[185389]: 2026-01-26 17:11:24.706 185393 INFO nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: 8a322d6b-3a53-4389-8cee-ffbe9b632b0f] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 17:11:24 compute-0 nova_compute[185389]: 2026-01-26 17:11:24.731 185393 INFO nova.compute.manager [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 8a322d6b-3a53-4389-8cee-ffbe9b632b0f] Took 5.21 seconds to spawn the instance on the hypervisor.
Jan 26 17:11:24 compute-0 nova_compute[185389]: 2026-01-26 17:11:24.732 185393 DEBUG nova.compute.manager [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 8a322d6b-3a53-4389-8cee-ffbe9b632b0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 17:11:25 compute-0 nova_compute[185389]: 2026-01-26 17:11:25.017 185393 INFO nova.compute.manager [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 8a322d6b-3a53-4389-8cee-ffbe9b632b0f] Took 6.34 seconds to build instance.
Jan 26 17:11:25 compute-0 nova_compute[185389]: 2026-01-26 17:11:25.172 185393 DEBUG oslo_concurrency.lockutils [None req-e0998030-6491-43b5-aa02-e468b454cd9d 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "8a322d6b-3a53-4389-8cee-ffbe9b632b0f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.593s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:11:25 compute-0 nova_compute[185389]: 2026-01-26 17:11:25.494 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:11:26 compute-0 systemd[1]: Starting libvirt proxy daemon...
Jan 26 17:11:26 compute-0 systemd[1]: Started libvirt proxy daemon.
Jan 26 17:11:29 compute-0 nova_compute[185389]: 2026-01-26 17:11:29.545 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:11:29 compute-0 podman[201244]: time="2026-01-26T17:11:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 17:11:29 compute-0 podman[201244]: @ - - [26/Jan/2026:17:11:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 17:11:29 compute-0 podman[201244]: @ - - [26/Jan/2026:17:11:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4385 "" "Go-http-client/1.1"
Jan 26 17:11:30 compute-0 nova_compute[185389]: 2026-01-26 17:11:30.498 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:11:31 compute-0 openstack_network_exporter[204387]: ERROR   17:11:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 17:11:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:11:31 compute-0 openstack_network_exporter[204387]: ERROR   17:11:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 17:11:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:11:34 compute-0 nova_compute[185389]: 2026-01-26 17:11:34.548 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:11:35 compute-0 nova_compute[185389]: 2026-01-26 17:11:35.499 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:11:36 compute-0 nova_compute[185389]: 2026-01-26 17:11:36.597 185393 DEBUG oslo_concurrency.lockutils [None req-7812762f-0bfa-464f-8c14-c63bead183f8 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Acquiring lock "8a322d6b-3a53-4389-8cee-ffbe9b632b0f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:11:36 compute-0 nova_compute[185389]: 2026-01-26 17:11:36.598 185393 DEBUG oslo_concurrency.lockutils [None req-7812762f-0bfa-464f-8c14-c63bead183f8 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "8a322d6b-3a53-4389-8cee-ffbe9b632b0f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:11:36 compute-0 nova_compute[185389]: 2026-01-26 17:11:36.599 185393 DEBUG oslo_concurrency.lockutils [None req-7812762f-0bfa-464f-8c14-c63bead183f8 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Acquiring lock "8a322d6b-3a53-4389-8cee-ffbe9b632b0f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:11:36 compute-0 nova_compute[185389]: 2026-01-26 17:11:36.599 185393 DEBUG oslo_concurrency.lockutils [None req-7812762f-0bfa-464f-8c14-c63bead183f8 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "8a322d6b-3a53-4389-8cee-ffbe9b632b0f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:11:36 compute-0 nova_compute[185389]: 2026-01-26 17:11:36.600 185393 DEBUG oslo_concurrency.lockutils [None req-7812762f-0bfa-464f-8c14-c63bead183f8 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "8a322d6b-3a53-4389-8cee-ffbe9b632b0f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:11:36 compute-0 nova_compute[185389]: 2026-01-26 17:11:36.602 185393 INFO nova.compute.manager [None req-7812762f-0bfa-464f-8c14-c63bead183f8 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 8a322d6b-3a53-4389-8cee-ffbe9b632b0f] Terminating instance
Jan 26 17:11:36 compute-0 nova_compute[185389]: 2026-01-26 17:11:36.603 185393 DEBUG oslo_concurrency.lockutils [None req-7812762f-0bfa-464f-8c14-c63bead183f8 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Acquiring lock "refresh_cache-8a322d6b-3a53-4389-8cee-ffbe9b632b0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 17:11:36 compute-0 nova_compute[185389]: 2026-01-26 17:11:36.604 185393 DEBUG oslo_concurrency.lockutils [None req-7812762f-0bfa-464f-8c14-c63bead183f8 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Acquired lock "refresh_cache-8a322d6b-3a53-4389-8cee-ffbe9b632b0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 17:11:36 compute-0 nova_compute[185389]: 2026-01-26 17:11:36.604 185393 DEBUG nova.network.neutron [None req-7812762f-0bfa-464f-8c14-c63bead183f8 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 8a322d6b-3a53-4389-8cee-ffbe9b632b0f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 17:11:36 compute-0 nova_compute[185389]: 2026-01-26 17:11:36.758 185393 DEBUG nova.network.neutron [None req-7812762f-0bfa-464f-8c14-c63bead183f8 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 8a322d6b-3a53-4389-8cee-ffbe9b632b0f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 17:11:37 compute-0 nova_compute[185389]: 2026-01-26 17:11:37.797 185393 DEBUG nova.network.neutron [None req-7812762f-0bfa-464f-8c14-c63bead183f8 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 8a322d6b-3a53-4389-8cee-ffbe9b632b0f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 17:11:37 compute-0 nova_compute[185389]: 2026-01-26 17:11:37.915 185393 DEBUG oslo_concurrency.lockutils [None req-7812762f-0bfa-464f-8c14-c63bead183f8 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Releasing lock "refresh_cache-8a322d6b-3a53-4389-8cee-ffbe9b632b0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 17:11:37 compute-0 nova_compute[185389]: 2026-01-26 17:11:37.916 185393 DEBUG nova.compute.manager [None req-7812762f-0bfa-464f-8c14-c63bead183f8 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 8a322d6b-3a53-4389-8cee-ffbe9b632b0f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 17:11:37 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Deactivated successfully.
Jan 26 17:11:37 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Consumed 14.133s CPU time.
Jan 26 17:11:37 compute-0 systemd-machined[156679]: Machine qemu-6-instance-00000006 terminated.
Jan 26 17:11:38 compute-0 nova_compute[185389]: 2026-01-26 17:11:38.191 185393 INFO nova.virt.libvirt.driver [-] [instance: 8a322d6b-3a53-4389-8cee-ffbe9b632b0f] Instance destroyed successfully.
Jan 26 17:11:38 compute-0 nova_compute[185389]: 2026-01-26 17:11:38.191 185393 DEBUG nova.objects.instance [None req-7812762f-0bfa-464f-8c14-c63bead183f8 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lazy-loading 'resources' on Instance uuid 8a322d6b-3a53-4389-8cee-ffbe9b632b0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 17:11:38 compute-0 nova_compute[185389]: 2026-01-26 17:11:38.211 185393 INFO nova.virt.libvirt.driver [None req-7812762f-0bfa-464f-8c14-c63bead183f8 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 8a322d6b-3a53-4389-8cee-ffbe9b632b0f] Deleting instance files /var/lib/nova/instances/8a322d6b-3a53-4389-8cee-ffbe9b632b0f_del
Jan 26 17:11:38 compute-0 nova_compute[185389]: 2026-01-26 17:11:38.212 185393 INFO nova.virt.libvirt.driver [None req-7812762f-0bfa-464f-8c14-c63bead183f8 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 8a322d6b-3a53-4389-8cee-ffbe9b632b0f] Deletion of /var/lib/nova/instances/8a322d6b-3a53-4389-8cee-ffbe9b632b0f_del complete
Jan 26 17:11:38 compute-0 nova_compute[185389]: 2026-01-26 17:11:38.440 185393 INFO nova.compute.manager [None req-7812762f-0bfa-464f-8c14-c63bead183f8 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 8a322d6b-3a53-4389-8cee-ffbe9b632b0f] Took 0.52 seconds to destroy the instance on the hypervisor.
Jan 26 17:11:38 compute-0 nova_compute[185389]: 2026-01-26 17:11:38.441 185393 DEBUG oslo.service.loopingcall [None req-7812762f-0bfa-464f-8c14-c63bead183f8 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 17:11:38 compute-0 nova_compute[185389]: 2026-01-26 17:11:38.442 185393 DEBUG nova.compute.manager [-] [instance: 8a322d6b-3a53-4389-8cee-ffbe9b632b0f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 17:11:38 compute-0 nova_compute[185389]: 2026-01-26 17:11:38.443 185393 DEBUG nova.network.neutron [-] [instance: 8a322d6b-3a53-4389-8cee-ffbe9b632b0f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 17:11:39 compute-0 nova_compute[185389]: 2026-01-26 17:11:39.487 185393 DEBUG nova.network.neutron [-] [instance: 8a322d6b-3a53-4389-8cee-ffbe9b632b0f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 17:11:39 compute-0 nova_compute[185389]: 2026-01-26 17:11:39.504 185393 DEBUG nova.network.neutron [-] [instance: 8a322d6b-3a53-4389-8cee-ffbe9b632b0f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 17:11:39 compute-0 nova_compute[185389]: 2026-01-26 17:11:39.522 185393 INFO nova.compute.manager [-] [instance: 8a322d6b-3a53-4389-8cee-ffbe9b632b0f] Took 1.08 seconds to deallocate network for instance.
Jan 26 17:11:39 compute-0 nova_compute[185389]: 2026-01-26 17:11:39.552 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:11:39 compute-0 nova_compute[185389]: 2026-01-26 17:11:39.575 185393 DEBUG oslo_concurrency.lockutils [None req-7812762f-0bfa-464f-8c14-c63bead183f8 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:11:39 compute-0 nova_compute[185389]: 2026-01-26 17:11:39.575 185393 DEBUG oslo_concurrency.lockutils [None req-7812762f-0bfa-464f-8c14-c63bead183f8 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:11:39 compute-0 nova_compute[185389]: 2026-01-26 17:11:39.695 185393 DEBUG nova.compute.provider_tree [None req-7812762f-0bfa-464f-8c14-c63bead183f8 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:11:39 compute-0 nova_compute[185389]: 2026-01-26 17:11:39.717 185393 DEBUG nova.scheduler.client.report [None req-7812762f-0bfa-464f-8c14-c63bead183f8 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:11:39 compute-0 nova_compute[185389]: 2026-01-26 17:11:39.748 185393 DEBUG oslo_concurrency.lockutils [None req-7812762f-0bfa-464f-8c14-c63bead183f8 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.173s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:11:39 compute-0 nova_compute[185389]: 2026-01-26 17:11:39.796 185393 INFO nova.scheduler.client.report [None req-7812762f-0bfa-464f-8c14-c63bead183f8 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Deleted allocations for instance 8a322d6b-3a53-4389-8cee-ffbe9b632b0f
Jan 26 17:11:39 compute-0 nova_compute[185389]: 2026-01-26 17:11:39.859 185393 DEBUG oslo_concurrency.lockutils [None req-7812762f-0bfa-464f-8c14-c63bead183f8 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "8a322d6b-3a53-4389-8cee-ffbe9b632b0f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.260s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:11:40 compute-0 nova_compute[185389]: 2026-01-26 17:11:40.502 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:11:44 compute-0 nova_compute[185389]: 2026-01-26 17:11:44.556 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:11:45 compute-0 nova_compute[185389]: 2026-01-26 17:11:45.505 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:11:48 compute-0 podman[251641]: 2026-01-26 17:11:48.207326047 +0000 UTC m=+0.074215594 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 26 17:11:48 compute-0 podman[251639]: 2026-01-26 17:11:48.212614481 +0000 UTC m=+0.082589732 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9-minimal, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, release=1755695350, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', 
'/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, container_name=openstack_network_exporter, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., config_id=openstack_network_exporter, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers)
Jan 26 17:11:48 compute-0 podman[251640]: 2026-01-26 17:11:48.213927187 +0000 UTC m=+0.085611815 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260120, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Jan 26 17:11:49 compute-0 nova_compute[185389]: 2026-01-26 17:11:49.561 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:11:50 compute-0 nova_compute[185389]: 2026-01-26 17:11:50.508 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:11:51 compute-0 podman[251705]: 2026-01-26 17:11:51.193912226 +0000 UTC m=+0.072539500 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 26 17:11:53 compute-0 nova_compute[185389]: 2026-01-26 17:11:53.188 185393 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769447498.1866326, 8a322d6b-3a53-4389-8cee-ffbe9b632b0f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 17:11:53 compute-0 nova_compute[185389]: 2026-01-26 17:11:53.188 185393 INFO nova.compute.manager [-] [instance: 8a322d6b-3a53-4389-8cee-ffbe9b632b0f] VM Stopped (Lifecycle Event)
Jan 26 17:11:53 compute-0 podman[251727]: 2026-01-26 17:11:53.200906024 +0000 UTC m=+0.068918842 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, container_name=ovn_metadata_agent)
Jan 26 17:11:53 compute-0 nova_compute[185389]: 2026-01-26 17:11:53.310 185393 DEBUG nova.compute.manager [None req-dce7ecc0-800a-4904-b92e-d9b90783d5e3 - - - - - -] [instance: 8a322d6b-3a53-4389-8cee-ffbe9b632b0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 17:11:54 compute-0 nova_compute[185389]: 2026-01-26 17:11:54.564 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:11:55 compute-0 sshd-session[251747]: Invalid user config from 176.120.22.13 port 35852
Jan 26 17:11:55 compute-0 podman[251750]: 2026-01-26 17:11:55.224374297 +0000 UTC m=+0.100635642 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:11:55 compute-0 podman[251751]: 2026-01-26 17:11:55.250291401 +0000 UTC m=+0.122379762 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, vendor=Red Hat, Inc., config_id=kepler, name=ubi9, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, distribution-scope=public, release=1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 26 17:11:55 compute-0 podman[251749]: 2026-01-26 17:11:55.270800848 +0000 UTC m=+0.144716839 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, 
tcib_managed=true, org.label-schema.build-date=20251202)
Jan 26 17:11:55 compute-0 nova_compute[185389]: 2026-01-26 17:11:55.510 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:11:55 compute-0 sshd-session[251747]: Connection reset by invalid user config 176.120.22.13 port 35852 [preauth]
Jan 26 17:11:57 compute-0 sshd-session[251141]: Received disconnect from 38.102.83.145 port 51616:11: disconnected by user
Jan 26 17:11:57 compute-0 sshd-session[251141]: Disconnected from user zuul 38.102.83.145 port 51616
Jan 26 17:11:57 compute-0 sshd-session[251138]: pam_unix(sshd:session): session closed for user zuul
Jan 26 17:11:57 compute-0 systemd[1]: session-30.scope: Deactivated successfully.
Jan 26 17:11:57 compute-0 systemd[1]: session-30.scope: Consumed 1.196s CPU time.
Jan 26 17:11:57 compute-0 systemd-logind[788]: Session 30 logged out. Waiting for processes to exit.
Jan 26 17:11:57 compute-0 systemd-logind[788]: Removed session 30.
Jan 26 17:11:57 compute-0 sshd-session[251809]: Invalid user supervisor from 176.120.22.13 port 35868
Jan 26 17:11:57 compute-0 sshd-session[251809]: Connection reset by invalid user supervisor 176.120.22.13 port 35868 [preauth]
Jan 26 17:11:59 compute-0 nova_compute[185389]: 2026-01-26 17:11:59.569 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:11:59 compute-0 podman[201244]: time="2026-01-26T17:11:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 17:11:59 compute-0 podman[201244]: @ - - [26/Jan/2026:17:11:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 17:11:59 compute-0 podman[201244]: @ - - [26/Jan/2026:17:11:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4390 "" "Go-http-client/1.1"
Jan 26 17:12:00 compute-0 sshd-session[251811]: Invalid user user1 from 176.120.22.13 port 35894
Jan 26 17:12:00 compute-0 nova_compute[185389]: 2026-01-26 17:12:00.511 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:12:00 compute-0 sshd-session[251811]: Connection reset by invalid user user1 176.120.22.13 port 35894 [preauth]
Jan 26 17:12:01 compute-0 openstack_network_exporter[204387]: ERROR   17:12:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 17:12:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:12:01 compute-0 openstack_network_exporter[204387]: ERROR   17:12:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 17:12:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:12:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:12:01.762 106955 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:12:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:12:01.762 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:12:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:12:01.763 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:12:02 compute-0 sshd-session[251813]: Invalid user mysql from 176.120.22.13 port 35910
Jan 26 17:12:02 compute-0 nova_compute[185389]: 2026-01-26 17:12:02.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:12:02 compute-0 nova_compute[185389]: 2026-01-26 17:12:02.720 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 17:12:02 compute-0 sshd-session[251813]: Connection reset by invalid user mysql 176.120.22.13 port 35910 [preauth]
Jan 26 17:12:04 compute-0 nova_compute[185389]: 2026-01-26 17:12:04.572 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:12:04 compute-0 nova_compute[185389]: 2026-01-26 17:12:04.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:12:04 compute-0 nova_compute[185389]: 2026-01-26 17:12:04.720 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 17:12:04 compute-0 sshd-session[251815]: Connection reset by authenticating user ftp 176.120.22.13 port 47438 [preauth]
Jan 26 17:12:04 compute-0 nova_compute[185389]: 2026-01-26 17:12:04.987 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "refresh_cache-a2578f61-3f19-40f4-a32f-97cf22569550" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 17:12:04 compute-0 nova_compute[185389]: 2026-01-26 17:12:04.988 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquired lock "refresh_cache-a2578f61-3f19-40f4-a32f-97cf22569550" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 17:12:04 compute-0 nova_compute[185389]: 2026-01-26 17:12:04.988 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 26 17:12:05 compute-0 nova_compute[185389]: 2026-01-26 17:12:05.513 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:12:07 compute-0 nova_compute[185389]: 2026-01-26 17:12:07.010 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Updating instance_info_cache with network_info: [{"id": "58a644b5-e3a2-4838-9216-8540447cf0a5", "address": "fa:16:3e:ac:8d:a5", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.107", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58a644b5-e3", "ovs_interfaceid": "58a644b5-e3a2-4838-9216-8540447cf0a5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 17:12:07 compute-0 nova_compute[185389]: 2026-01-26 17:12:07.174 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Releasing lock "refresh_cache-a2578f61-3f19-40f4-a32f-97cf22569550" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 17:12:07 compute-0 nova_compute[185389]: 2026-01-26 17:12:07.175 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 26 17:12:07 compute-0 nova_compute[185389]: 2026-01-26 17:12:07.175 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:12:07 compute-0 nova_compute[185389]: 2026-01-26 17:12:07.175 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:12:07 compute-0 nova_compute[185389]: 2026-01-26 17:12:07.175 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:12:07 compute-0 nova_compute[185389]: 2026-01-26 17:12:07.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:12:09 compute-0 nova_compute[185389]: 2026-01-26 17:12:09.578 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:12:10 compute-0 nova_compute[185389]: 2026-01-26 17:12:10.515 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:12:10 compute-0 nova_compute[185389]: 2026-01-26 17:12:10.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:12:10 compute-0 nova_compute[185389]: 2026-01-26 17:12:10.786 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:12:10 compute-0 nova_compute[185389]: 2026-01-26 17:12:10.786 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:12:10 compute-0 nova_compute[185389]: 2026-01-26 17:12:10.786 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:12:10 compute-0 nova_compute[185389]: 2026-01-26 17:12:10.786 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 17:12:10 compute-0 nova_compute[185389]: 2026-01-26 17:12:10.885 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:12:10 compute-0 nova_compute[185389]: 2026-01-26 17:12:10.963 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:12:10 compute-0 nova_compute[185389]: 2026-01-26 17:12:10.965 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:12:11 compute-0 nova_compute[185389]: 2026-01-26 17:12:11.054 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:12:11 compute-0 nova_compute[185389]: 2026-01-26 17:12:11.055 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:12:11 compute-0 nova_compute[185389]: 2026-01-26 17:12:11.124 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:12:11 compute-0 nova_compute[185389]: 2026-01-26 17:12:11.126 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:12:11 compute-0 nova_compute[185389]: 2026-01-26 17:12:11.201 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:12:11 compute-0 nova_compute[185389]: 2026-01-26 17:12:11.208 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:12:11 compute-0 nova_compute[185389]: 2026-01-26 17:12:11.274 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:12:11 compute-0 nova_compute[185389]: 2026-01-26 17:12:11.276 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:12:11 compute-0 nova_compute[185389]: 2026-01-26 17:12:11.343 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:12:11 compute-0 nova_compute[185389]: 2026-01-26 17:12:11.344 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:12:11 compute-0 nova_compute[185389]: 2026-01-26 17:12:11.406 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:12:11 compute-0 nova_compute[185389]: 2026-01-26 17:12:11.408 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:12:11 compute-0 nova_compute[185389]: 2026-01-26 17:12:11.480 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:12:11 compute-0 nova_compute[185389]: 2026-01-26 17:12:11.845 185393 WARNING nova.virt.libvirt.driver [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:12:11 compute-0 nova_compute[185389]: 2026-01-26 17:12:11.846 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4842MB free_disk=72.3694076538086GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 17:12:11 compute-0 nova_compute[185389]: 2026-01-26 17:12:11.846 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:12:11 compute-0 nova_compute[185389]: 2026-01-26 17:12:11.847 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:12:12 compute-0 nova_compute[185389]: 2026-01-26 17:12:12.051 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance 60ba224f-9c5d-4eb4-b501-66d7339832b9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 17:12:12 compute-0 nova_compute[185389]: 2026-01-26 17:12:12.051 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance a2578f61-3f19-40f4-a32f-97cf22569550 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 17:12:12 compute-0 nova_compute[185389]: 2026-01-26 17:12:12.052 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 17:12:12 compute-0 nova_compute[185389]: 2026-01-26 17:12:12.052 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 17:12:12 compute-0 nova_compute[185389]: 2026-01-26 17:12:12.113 185393 DEBUG nova.compute.provider_tree [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:12:12 compute-0 nova_compute[185389]: 2026-01-26 17:12:12.129 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:12:12 compute-0 nova_compute[185389]: 2026-01-26 17:12:12.224 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 17:12:12 compute-0 nova_compute[185389]: 2026-01-26 17:12:12.224 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.377s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:12:14 compute-0 nova_compute[185389]: 2026-01-26 17:12:14.583 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:12:15 compute-0 nova_compute[185389]: 2026-01-26 17:12:15.518 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:12:16 compute-0 nova_compute[185389]: 2026-01-26 17:12:16.222 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:12:18 compute-0 sshd-session[251842]: Accepted publickey for zuul from 38.102.83.145 port 36532 ssh2: RSA SHA256:CwDInbOSxpxqp3mWwtfmY0v0Zi73QXMq6svTI6Qp+40
Jan 26 17:12:18 compute-0 systemd-logind[788]: New session 31 of user zuul.
Jan 26 17:12:18 compute-0 systemd[1]: Started Session 31 of User zuul.
Jan 26 17:12:18 compute-0 sshd-session[251842]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 26 17:12:18 compute-0 sudo[252053]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysxjakaahwwkenqdnukspxwksjaonopg ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769447538.2279913-61575-158579589268687/AnsiballZ_command.py'
Jan 26 17:12:18 compute-0 sudo[252053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 17:12:18 compute-0 podman[251995]: 2026-01-26 17:12:18.884614673 +0000 UTC m=+0.094626349 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260120, org.label-schema.license=GPLv2, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 26 17:12:18 compute-0 podman[251996]: 2026-01-26 17:12:18.89812287 +0000 UTC m=+0.100919960 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 26 17:12:18 compute-0 podman[251994]: 2026-01-26 17:12:18.917753482 +0000 UTC m=+0.129870435 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, vcs-type=git, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., managed_by=edpm_ansible, config_id=openstack_network_exporter, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, architecture=x86_64, distribution-scope=public, version=9.6)
Jan 26 17:12:19 compute-0 python3[252076]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep node_exporter
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 17:12:19 compute-0 sudo[252053]: pam_unix(sudo:session): session closed for user root
Jan 26 17:12:19 compute-0 nova_compute[185389]: 2026-01-26 17:12:19.588 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:12:19 compute-0 nova_compute[185389]: 2026-01-26 17:12:19.714 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:12:20 compute-0 nova_compute[185389]: 2026-01-26 17:12:20.521 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:12:20 compute-0 nova_compute[185389]: 2026-01-26 17:12:20.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:12:22 compute-0 podman[252124]: 2026-01-26 17:12:22.231264079 +0000 UTC m=+0.108123286 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 26 17:12:24 compute-0 podman[252149]: 2026-01-26 17:12:24.233879046 +0000 UTC m=+0.104762635 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 26 17:12:24 compute-0 nova_compute[185389]: 2026-01-26 17:12:24.592 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:12:25 compute-0 nova_compute[185389]: 2026-01-26 17:12:25.524 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:12:26 compute-0 podman[252168]: 2026-01-26 17:12:26.22268796 +0000 UTC m=+0.092196743 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., architecture=x86_64, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, release=1214.1726694543, release-0.7.12=, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., version=9.4, build-date=2024-09-18T21:23:30)
Jan 26 17:12:26 compute-0 podman[252167]: 2026-01-26 17:12:26.248561782 +0000 UTC m=+0.123899974 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 26 17:12:26 compute-0 podman[252166]: 2026-01-26 17:12:26.265680426 +0000 UTC m=+0.144554754 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Jan 26 17:12:27 compute-0 sudo[252403]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzwklpwrjmnrtqpxndrdvlctvqupzehh ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769447546.449825-61743-122364043531726/AnsiballZ_command.py'
Jan 26 17:12:27 compute-0 sudo[252403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 17:12:27 compute-0 python3[252405]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep podman_exporter
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 17:12:27 compute-0 sudo[252403]: pam_unix(sudo:session): session closed for user root
Jan 26 17:12:29 compute-0 nova_compute[185389]: 2026-01-26 17:12:29.595 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:12:29 compute-0 podman[201244]: time="2026-01-26T17:12:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 17:12:29 compute-0 podman[201244]: @ - - [26/Jan/2026:17:12:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 17:12:29 compute-0 podman[201244]: @ - - [26/Jan/2026:17:12:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4379 "" "Go-http-client/1.1"
Jan 26 17:12:30 compute-0 nova_compute[185389]: 2026-01-26 17:12:30.526 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.351 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.352 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.352 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.352 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f04ce8af020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.353 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.354 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.354 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.354 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.354 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.354 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.354 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.354 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.355 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.355 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.355 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.355 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.355 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8adb20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.355 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.355 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.355 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.355 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.355 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.356 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.356 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.356 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.356 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.356 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.357 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.357 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.363 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '60ba224f-9c5d-4eb4-b501-66d7339832b9', 'name': 'test_0', 'flavor': {'id': 'c2a8df4d-a1d7-42a3-8279-8c7de8a1a662', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '718285d9-0264-40f4-9fb3-d2faff180284'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'aa8f1f3bbce34237a208c8e92ca9286f', 'user_id': '3c0ab9326d69400aa6a4a91432885d7f', 'hostId': '5b2ad2004cb5a5985538ff82d4dda707a9aa9c0c35c745039e18e89b', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.366 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'a2578f61-3f19-40f4-a32f-97cf22569550', 'name': 'vn-vo2qfhx-3o3bmw4mfsy3-eo23jljqgyv6-vnf-pi7veetjym6y', 'flavor': {'id': 'c2a8df4d-a1d7-42a3-8279-8c7de8a1a662', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '718285d9-0264-40f4-9fb3-d2faff180284'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'aa8f1f3bbce34237a208c8e92ca9286f', 'user_id': '3c0ab9326d69400aa6a4a91432885d7f', 'hostId': '5b2ad2004cb5a5985538ff82d4dda707a9aa9c0c35c745039e18e89b', 'status': 'active', 'metadata': {'metering.server_group': '06b33269-d1c6-4fb9-a44b-be304982a550'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.367 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.367 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.367 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.368 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.369 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-26T17:12:31.367996) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:12:31 compute-0 openstack_network_exporter[204387]: ERROR   17:12:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 17:12:31 compute-0 openstack_network_exporter[204387]: ERROR   17:12:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.458 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.460 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.460 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.558 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.559 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.560 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.561 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.561 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f04ce8af080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.561 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.561 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.561 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.562 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.562 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.latency volume: 876270399 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.562 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-26T17:12:31.562096) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.562 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.latency volume: 10769042 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.563 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.563 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.latency volume: 1221465504 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.563 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.latency volume: 9811607 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.564 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.564 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.564 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f04ce8af0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.565 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.565 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.565 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.565 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.565 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.565 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-26T17:12:31.565487) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.566 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.566 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.566 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.567 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.567 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.567 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.568 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f04ce8af470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.568 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.568 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.568 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.568 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.569 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-26T17:12:31.568847) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.575 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.580 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.581 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.581 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f04ce8ac260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.582 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.582 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f04cfa81c70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.582 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.582 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.582 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.583 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.583 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-26T17:12:31.582940) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.611 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/cpu volume: 58880000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.635 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/cpu volume: 53270000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.636 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.636 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f04ce8ac170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.636 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.636 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.636 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.637 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.637 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.637 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.638 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.638 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f04ce8af140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.638 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.638 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.638 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.638 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.638 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-26T17:12:31.636903) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.638 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.639 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f04ce8adb50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.639 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.639 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.639 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.639 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.639 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-26T17:12:31.638476) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.639 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.640 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.640 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.640 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f04ce8ad9a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.640 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.640 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-26T17:12:31.639642) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.641 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.641 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.641 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.641 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.bytes volume: 2482 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.641 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.outgoing.bytes volume: 2538 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.641 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.642 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f04ce8af1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.642 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.642 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.642 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.642 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.642 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-26T17:12:31.641241) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.642 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.643 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f04ce8ad8e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.643 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-26T17:12:31.642415) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.643 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.643 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.643 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.643 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.643 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.644 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.outgoing.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.644 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.644 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f04ce8ada60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.644 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.644 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.644 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.645 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.645 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.645 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.645 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.646 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f04ce8adaf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.646 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.646 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f04ce8ad880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.646 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.646 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-26T17:12:31.643700) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.646 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.646 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.646 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.646 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.647 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.647 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.647 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f04ce8af3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.647 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.647 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.647 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.647 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.648 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/memory.usage volume: 48.76171875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.648 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/memory.usage volume: 48.953125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.648 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-26T17:12:31.645083) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.648 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.648 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-26T17:12:31.646762) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.648 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f04ce8af6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.649 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.649 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.649 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.649 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-26T17:12:31.647881) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.649 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.649 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.649 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-26T17:12:31.649459) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.649 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.650 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.650 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f04ce8af410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.650 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.650 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.650 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.650 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.651 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.bytes volume: 2304 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.651 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-26T17:12:31.650854) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.651 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.incoming.bytes volume: 1696 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.651 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.651 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f04ce8ac470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.651 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.652 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.652 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.652 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.652 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.652 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.652 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.652 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f04d17142f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.653 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.653 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.653 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.653 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.653 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-26T17:12:31.652159) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.654 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-26T17:12:31.653352) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.679 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.680 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.680 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.705 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.706 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.706 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.706 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.706 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f04ce8afe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.707 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.707 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.707 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.707 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.707 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.707 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.707 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.708 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.708 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.708 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.709 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.709 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-26T17:12:31.707288) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.709 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f04ce8aede0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.709 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.709 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.709 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.710 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.710 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.710 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-26T17:12:31.710093) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.710 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.710 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.711 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.711 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.711 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.711 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.712 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f04ce8aef00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.712 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.712 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.712 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.712 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.712 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.latency volume: 331117122 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.712 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.latency volume: 97972603 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.713 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.latency volume: 58692222 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.713 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.latency volume: 437272566 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.713 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.latency volume: 86953754 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.714 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.latency volume: 62824695 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.714 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.714 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f04ce8ac710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.714 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.714 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.714 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.714 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.715 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.715 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.715 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.715 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f04ce8aef60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.716 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.716 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-26T17:12:31.712491) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.716 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.716 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.716 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.716 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.717 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.717 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.717 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.718 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.718 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.718 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-26T17:12:31.714912) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.718 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.719 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f04ce8aefc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.719 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.719 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-26T17:12:31.716370) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.719 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.719 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.719 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.719 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.719 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.720 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.720 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.720 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.720 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.721 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-26T17:12:31.719546) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.721 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.722 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.722 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.722 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.722 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.723 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.723 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.723 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.723 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.723 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.723 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.723 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.723 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.723 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.724 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.724 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.724 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.724 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.724 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.724 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.724 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.724 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.725 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.725 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.725 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.725 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:12:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:12:31.725 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:12:34 compute-0 nova_compute[185389]: 2026-01-26 17:12:34.597 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:12:35 compute-0 nova_compute[185389]: 2026-01-26 17:12:35.529 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:12:37 compute-0 sudo[252619]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uuywxahdazkkncxeazqpnbwbqwitupfi ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769447556.4435499-61905-170728938516966/AnsiballZ_command.py'
Jan 26 17:12:37 compute-0 sudo[252619]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 17:12:37 compute-0 python3[252621]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep kepler
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 17:12:37 compute-0 sudo[252619]: pam_unix(sudo:session): session closed for user root
Jan 26 17:12:39 compute-0 sshd-session[252660]: Invalid user ubuntu from 80.94.92.171 port 59112
Jan 26 17:12:39 compute-0 nova_compute[185389]: 2026-01-26 17:12:39.602 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:12:39 compute-0 sshd-session[252660]: Connection closed by invalid user ubuntu 80.94.92.171 port 59112 [preauth]
Jan 26 17:12:40 compute-0 nova_compute[185389]: 2026-01-26 17:12:40.532 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:12:44 compute-0 nova_compute[185389]: 2026-01-26 17:12:44.605 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:12:45 compute-0 nova_compute[185389]: 2026-01-26 17:12:45.535 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:12:49 compute-0 podman[252664]: 2026-01-26 17:12:49.226197087 +0000 UTC m=+0.082749586 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 26 17:12:49 compute-0 podman[252663]: 2026-01-26 17:12:49.226241068 +0000 UTC m=+0.098377490 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, 
container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260120)
Jan 26 17:12:49 compute-0 podman[252662]: 2026-01-26 17:12:49.247790934 +0000 UTC m=+0.122470566 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, version=9.6, io.openshift.tags=minimal rhel9, architecture=x86_64, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', 
'/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=openstack_network_exporter, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc.)
Jan 26 17:12:49 compute-0 nova_compute[185389]: 2026-01-26 17:12:49.610 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:12:50 compute-0 nova_compute[185389]: 2026-01-26 17:12:50.537 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:12:52 compute-0 sudo[252912]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbjtmuniywdjzksacpidjmkzvoslykak ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769447571.8631725-62132-27464030934227/AnsiballZ_command.py'
Jan 26 17:12:52 compute-0 sudo[252912]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 17:12:52 compute-0 podman[252874]: 2026-01-26 17:12:52.55401527 +0000 UTC m=+0.118382433 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 26 17:12:52 compute-0 python3[252925]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep openstack_network_exporter
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 17:12:52 compute-0 sudo[252912]: pam_unix(sudo:session): session closed for user root
Jan 26 17:12:54 compute-0 nova_compute[185389]: 2026-01-26 17:12:54.614 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:12:55 compute-0 podman[252962]: 2026-01-26 17:12:55.195291474 +0000 UTC m=+0.078246314 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 26 17:12:55 compute-0 nova_compute[185389]: 2026-01-26 17:12:55.539 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:12:57 compute-0 podman[252983]: 2026-01-26 17:12:57.233583701 +0000 UTC m=+0.111766983 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=kepler, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, release-0.7.12=, vcs-type=git, container_name=kepler, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9)
Jan 26 17:12:57 compute-0 podman[252982]: 2026-01-26 17:12:57.237722004 +0000 UTC m=+0.108863865 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 26 17:12:57 compute-0 podman[252981]: 2026-01-26 17:12:57.248902348 +0000 UTC m=+0.138646084 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 17:12:59 compute-0 nova_compute[185389]: 2026-01-26 17:12:59.617 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:12:59 compute-0 podman[201244]: time="2026-01-26T17:12:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 17:12:59 compute-0 podman[201244]: @ - - [26/Jan/2026:17:12:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 17:12:59 compute-0 podman[201244]: @ - - [26/Jan/2026:17:12:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4386 "" "Go-http-client/1.1"
Jan 26 17:13:00 compute-0 nova_compute[185389]: 2026-01-26 17:13:00.543 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:13:01 compute-0 openstack_network_exporter[204387]: ERROR   17:13:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 17:13:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:13:01 compute-0 openstack_network_exporter[204387]: ERROR   17:13:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 17:13:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:13:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:13:01.764 106955 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:13:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:13:01.766 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:13:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:13:01.778 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.013s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:13:04 compute-0 nova_compute[185389]: 2026-01-26 17:13:04.620 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:13:04 compute-0 nova_compute[185389]: 2026-01-26 17:13:04.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:13:04 compute-0 nova_compute[185389]: 2026-01-26 17:13:04.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:13:04 compute-0 nova_compute[185389]: 2026-01-26 17:13:04.720 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 17:13:05 compute-0 nova_compute[185389]: 2026-01-26 17:13:05.545 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:13:05 compute-0 nova_compute[185389]: 2026-01-26 17:13:05.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:13:05 compute-0 nova_compute[185389]: 2026-01-26 17:13:05.721 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 17:13:05 compute-0 nova_compute[185389]: 2026-01-26 17:13:05.721 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 17:13:07 compute-0 nova_compute[185389]: 2026-01-26 17:13:07.020 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "refresh_cache-60ba224f-9c5d-4eb4-b501-66d7339832b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 17:13:07 compute-0 nova_compute[185389]: 2026-01-26 17:13:07.021 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquired lock "refresh_cache-60ba224f-9c5d-4eb4-b501-66d7339832b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 17:13:07 compute-0 nova_compute[185389]: 2026-01-26 17:13:07.021 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 26 17:13:07 compute-0 nova_compute[185389]: 2026-01-26 17:13:07.022 185393 DEBUG nova.objects.instance [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 60ba224f-9c5d-4eb4-b501-66d7339832b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 17:13:09 compute-0 nova_compute[185389]: 2026-01-26 17:13:09.625 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:13:10 compute-0 nova_compute[185389]: 2026-01-26 17:13:10.549 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:13:13 compute-0 nova_compute[185389]: 2026-01-26 17:13:13.954 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Updating instance_info_cache with network_info: [{"id": "0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f", "address": "fa:16:3e:b0:51:31", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.57", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f88f3ae-fb", "ovs_interfaceid": "0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 17:13:14 compute-0 nova_compute[185389]: 2026-01-26 17:13:14.177 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Releasing lock "refresh_cache-60ba224f-9c5d-4eb4-b501-66d7339832b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 17:13:14 compute-0 nova_compute[185389]: 2026-01-26 17:13:14.178 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 26 17:13:14 compute-0 nova_compute[185389]: 2026-01-26 17:13:14.179 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:13:14 compute-0 nova_compute[185389]: 2026-01-26 17:13:14.181 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:13:14 compute-0 nova_compute[185389]: 2026-01-26 17:13:14.182 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:13:14 compute-0 nova_compute[185389]: 2026-01-26 17:13:14.183 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:13:14 compute-0 nova_compute[185389]: 2026-01-26 17:13:14.242 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:13:14 compute-0 nova_compute[185389]: 2026-01-26 17:13:14.243 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:13:14 compute-0 nova_compute[185389]: 2026-01-26 17:13:14.244 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:13:14 compute-0 nova_compute[185389]: 2026-01-26 17:13:14.244 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 17:13:14 compute-0 nova_compute[185389]: 2026-01-26 17:13:14.604 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:13:14 compute-0 nova_compute[185389]: 2026-01-26 17:13:14.630 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:13:14 compute-0 nova_compute[185389]: 2026-01-26 17:13:14.701 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:13:14 compute-0 nova_compute[185389]: 2026-01-26 17:13:14.703 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:13:14 compute-0 nova_compute[185389]: 2026-01-26 17:13:14.771 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:13:14 compute-0 nova_compute[185389]: 2026-01-26 17:13:14.772 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:13:14 compute-0 nova_compute[185389]: 2026-01-26 17:13:14.852 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:13:14 compute-0 nova_compute[185389]: 2026-01-26 17:13:14.854 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:13:14 compute-0 nova_compute[185389]: 2026-01-26 17:13:14.953 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:13:14 compute-0 nova_compute[185389]: 2026-01-26 17:13:14.961 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:13:15 compute-0 nova_compute[185389]: 2026-01-26 17:13:15.032 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:13:15 compute-0 nova_compute[185389]: 2026-01-26 17:13:15.033 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:13:15 compute-0 nova_compute[185389]: 2026-01-26 17:13:15.098 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:13:15 compute-0 nova_compute[185389]: 2026-01-26 17:13:15.099 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:13:15 compute-0 nova_compute[185389]: 2026-01-26 17:13:15.167 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:13:15 compute-0 nova_compute[185389]: 2026-01-26 17:13:15.168 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:13:15 compute-0 nova_compute[185389]: 2026-01-26 17:13:15.240 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:13:15 compute-0 nova_compute[185389]: 2026-01-26 17:13:15.550 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:13:15 compute-0 nova_compute[185389]: 2026-01-26 17:13:15.653 185393 WARNING nova.virt.libvirt.driver [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:13:15 compute-0 nova_compute[185389]: 2026-01-26 17:13:15.655 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4844MB free_disk=72.36935424804688GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 17:13:15 compute-0 nova_compute[185389]: 2026-01-26 17:13:15.655 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:13:15 compute-0 nova_compute[185389]: 2026-01-26 17:13:15.655 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:13:15 compute-0 nova_compute[185389]: 2026-01-26 17:13:15.923 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance 60ba224f-9c5d-4eb4-b501-66d7339832b9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 17:13:15 compute-0 nova_compute[185389]: 2026-01-26 17:13:15.924 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance a2578f61-3f19-40f4-a32f-97cf22569550 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 17:13:15 compute-0 nova_compute[185389]: 2026-01-26 17:13:15.924 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 17:13:15 compute-0 nova_compute[185389]: 2026-01-26 17:13:15.924 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 17:13:16 compute-0 nova_compute[185389]: 2026-01-26 17:13:16.005 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Refreshing inventories for resource provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 26 17:13:16 compute-0 nova_compute[185389]: 2026-01-26 17:13:16.070 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Updating ProviderTree inventory for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 26 17:13:16 compute-0 nova_compute[185389]: 2026-01-26 17:13:16.071 185393 DEBUG nova.compute.provider_tree [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Updating inventory in ProviderTree for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 26 17:13:16 compute-0 nova_compute[185389]: 2026-01-26 17:13:16.089 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Refreshing aggregate associations for resource provider b0bb5d31-f35b-4a04-b67d-66acc24fb822, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 26 17:13:16 compute-0 nova_compute[185389]: 2026-01-26 17:13:16.119 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Refreshing trait associations for resource provider b0bb5d31-f35b-4a04-b67d-66acc24fb822, traits: COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SHA,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_ABM,HW_CPU_X86_AMD_SVM,HW_CPU_X86_F16C,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE42,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_BMI,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_MMX,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 26 17:13:16 compute-0 nova_compute[185389]: 2026-01-26 17:13:16.185 185393 DEBUG nova.compute.provider_tree [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:13:16 compute-0 nova_compute[185389]: 2026-01-26 17:13:16.212 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:13:16 compute-0 nova_compute[185389]: 2026-01-26 17:13:16.213 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 17:13:16 compute-0 nova_compute[185389]: 2026-01-26 17:13:16.214 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.558s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:13:16 compute-0 nova_compute[185389]: 2026-01-26 17:13:16.214 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:13:16 compute-0 nova_compute[185389]: 2026-01-26 17:13:16.214 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 26 17:13:16 compute-0 nova_compute[185389]: 2026-01-26 17:13:16.753 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 26 17:13:19 compute-0 nova_compute[185389]: 2026-01-26 17:13:19.634 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:13:20 compute-0 podman[253073]: 2026-01-26 17:13:20.2376576 +0000 UTC m=+0.104779634 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 26 17:13:20 compute-0 podman[253072]: 2026-01-26 17:13:20.251338902 +0000 UTC m=+0.121479158 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260120, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4)
Jan 26 17:13:20 compute-0 podman[253071]: 2026-01-26 17:13:20.253923312 +0000 UTC m=+0.123152684 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, version=9.6, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, io.buildah.version=1.33.7, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, maintainer=Red Hat, Inc., architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Jan 26 17:13:20 compute-0 nova_compute[185389]: 2026-01-26 17:13:20.553 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:13:21 compute-0 nova_compute[185389]: 2026-01-26 17:13:21.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:13:21 compute-0 nova_compute[185389]: 2026-01-26 17:13:21.721 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:13:21 compute-0 nova_compute[185389]: 2026-01-26 17:13:21.721 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:13:21 compute-0 nova_compute[185389]: 2026-01-26 17:13:21.721 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 26 17:13:23 compute-0 podman[253128]: 2026-01-26 17:13:23.197128628 +0000 UTC m=+0.076037726 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 26 17:13:24 compute-0 nova_compute[185389]: 2026-01-26 17:13:24.638 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:13:25 compute-0 nova_compute[185389]: 2026-01-26 17:13:25.555 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:13:26 compute-0 podman[253152]: 2026-01-26 17:13:26.195780578 +0000 UTC m=+0.079197870 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 26 17:13:26 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 26 17:13:28 compute-0 podman[253173]: 2026-01-26 17:13:28.226466588 +0000 UTC m=+0.096144560 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, io.buildah.version=1.41.3)
Jan 26 17:13:28 compute-0 podman[253172]: 2026-01-26 17:13:28.241909377 +0000 UTC m=+0.117923221 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 26 17:13:28 compute-0 podman[253174]: 2026-01-26 17:13:28.245568937 +0000 UTC m=+0.104818806 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, release=1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., container_name=kepler, version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=kepler, managed_by=edpm_ansible)
Jan 26 17:13:29 compute-0 nova_compute[185389]: 2026-01-26 17:13:29.642 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:13:29 compute-0 podman[201244]: time="2026-01-26T17:13:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 17:13:29 compute-0 podman[201244]: @ - - [26/Jan/2026:17:13:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 17:13:29 compute-0 podman[201244]: @ - - [26/Jan/2026:17:13:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4394 "" "Go-http-client/1.1"
Jan 26 17:13:30 compute-0 nova_compute[185389]: 2026-01-26 17:13:30.558 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:13:31 compute-0 openstack_network_exporter[204387]: ERROR   17:13:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 17:13:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:13:31 compute-0 openstack_network_exporter[204387]: ERROR   17:13:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 17:13:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:13:33 compute-0 nova_compute[185389]: 2026-01-26 17:13:33.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:13:34 compute-0 nova_compute[185389]: 2026-01-26 17:13:34.646 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:13:35 compute-0 nova_compute[185389]: 2026-01-26 17:13:35.561 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:13:39 compute-0 nova_compute[185389]: 2026-01-26 17:13:39.649 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:13:40 compute-0 nova_compute[185389]: 2026-01-26 17:13:40.564 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:13:44 compute-0 systemd[1]: Starting dnf makecache...
Jan 26 17:13:44 compute-0 dnf[253235]: Metadata cache refreshed recently.
Jan 26 17:13:44 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Jan 26 17:13:44 compute-0 systemd[1]: Finished dnf makecache.
Jan 26 17:13:44 compute-0 nova_compute[185389]: 2026-01-26 17:13:44.654 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:13:45 compute-0 nova_compute[185389]: 2026-01-26 17:13:45.567 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:13:49 compute-0 nova_compute[185389]: 2026-01-26 17:13:49.658 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:13:50 compute-0 nova_compute[185389]: 2026-01-26 17:13:50.572 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:13:51 compute-0 podman[253239]: 2026-01-26 17:13:51.253873986 +0000 UTC m=+0.085693727 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 26 17:13:51 compute-0 podman[253238]: 2026-01-26 17:13:51.266361845 +0000 UTC m=+0.099513621 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260120, org.label-schema.name=CentOS Stream 10 Base Image)
Jan 26 17:13:51 compute-0 podman[253237]: 2026-01-26 17:13:51.273476548 +0000 UTC m=+0.107577381 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, maintainer=Red Hat, Inc., name=ubi9-minimal, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, release=1755695350, config_id=openstack_network_exporter, io.buildah.version=1.33.7)
Jan 26 17:13:52 compute-0 sshd-session[251845]: Received disconnect from 38.102.83.145 port 36532:11: disconnected by user
Jan 26 17:13:52 compute-0 sshd-session[251845]: Disconnected from user zuul 38.102.83.145 port 36532
Jan 26 17:13:52 compute-0 sshd-session[251842]: pam_unix(sshd:session): session closed for user zuul
Jan 26 17:13:52 compute-0 systemd[1]: session-31.scope: Deactivated successfully.
Jan 26 17:13:52 compute-0 systemd[1]: session-31.scope: Consumed 4.364s CPU time.
Jan 26 17:13:52 compute-0 systemd-logind[788]: Session 31 logged out. Waiting for processes to exit.
Jan 26 17:13:52 compute-0 systemd-logind[788]: Removed session 31.
Jan 26 17:13:54 compute-0 podman[253300]: 2026-01-26 17:13:54.188085418 +0000 UTC m=+0.072909380 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 26 17:13:54 compute-0 nova_compute[185389]: 2026-01-26 17:13:54.661 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:13:55 compute-0 nova_compute[185389]: 2026-01-26 17:13:55.574 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:13:57 compute-0 podman[253325]: 2026-01-26 17:13:57.225517132 +0000 UTC m=+0.093237392 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 26 17:13:59 compute-0 podman[253344]: 2026-01-26 17:13:59.261005032 +0000 UTC m=+0.117680254 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, vcs-type=git, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, io.openshift.tags=base rhel9, architecture=x86_64, version=9.4, com.redhat.component=ubi9-container, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Jan 26 17:13:59 compute-0 podman[253343]: 2026-01-26 17:13:59.282904957 +0000 UTC m=+0.138618903 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202)
Jan 26 17:13:59 compute-0 podman[253342]: 2026-01-26 17:13:59.294211404 +0000 UTC m=+0.155509811 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller)
Jan 26 17:13:59 compute-0 nova_compute[185389]: 2026-01-26 17:13:59.663 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:13:59 compute-0 podman[201244]: time="2026-01-26T17:13:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 17:13:59 compute-0 podman[201244]: @ - - [26/Jan/2026:17:13:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 17:13:59 compute-0 podman[201244]: @ - - [26/Jan/2026:17:13:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4382 "" "Go-http-client/1.1"
Jan 26 17:14:00 compute-0 nova_compute[185389]: 2026-01-26 17:14:00.576 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:14:01 compute-0 openstack_network_exporter[204387]: ERROR   17:14:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 17:14:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:14:01 compute-0 openstack_network_exporter[204387]: ERROR   17:14:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 17:14:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:14:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:14:01.764 106955 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:14:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:14:01.765 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:14:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:14:01.767 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:14:04 compute-0 nova_compute[185389]: 2026-01-26 17:14:04.665 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:14:04 compute-0 nova_compute[185389]: 2026-01-26 17:14:04.750 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:14:05 compute-0 nova_compute[185389]: 2026-01-26 17:14:05.580 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:14:05 compute-0 nova_compute[185389]: 2026-01-26 17:14:05.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:14:05 compute-0 nova_compute[185389]: 2026-01-26 17:14:05.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:14:05 compute-0 nova_compute[185389]: 2026-01-26 17:14:05.720 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 17:14:06 compute-0 nova_compute[185389]: 2026-01-26 17:14:06.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:14:06 compute-0 nova_compute[185389]: 2026-01-26 17:14:06.721 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 17:14:07 compute-0 nova_compute[185389]: 2026-01-26 17:14:07.081 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "refresh_cache-a2578f61-3f19-40f4-a32f-97cf22569550" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 17:14:07 compute-0 nova_compute[185389]: 2026-01-26 17:14:07.082 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquired lock "refresh_cache-a2578f61-3f19-40f4-a32f-97cf22569550" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 17:14:07 compute-0 nova_compute[185389]: 2026-01-26 17:14:07.093 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 26 17:14:08 compute-0 nova_compute[185389]: 2026-01-26 17:14:08.077 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Updating instance_info_cache with network_info: [{"id": "58a644b5-e3a2-4838-9216-8540447cf0a5", "address": "fa:16:3e:ac:8d:a5", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.107", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58a644b5-e3", "ovs_interfaceid": "58a644b5-e3a2-4838-9216-8540447cf0a5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 17:14:08 compute-0 nova_compute[185389]: 2026-01-26 17:14:08.100 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Releasing lock "refresh_cache-a2578f61-3f19-40f4-a32f-97cf22569550" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 17:14:08 compute-0 nova_compute[185389]: 2026-01-26 17:14:08.101 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 26 17:14:08 compute-0 nova_compute[185389]: 2026-01-26 17:14:08.101 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:14:08 compute-0 nova_compute[185389]: 2026-01-26 17:14:08.102 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:14:09 compute-0 nova_compute[185389]: 2026-01-26 17:14:09.668 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:14:10 compute-0 nova_compute[185389]: 2026-01-26 17:14:10.583 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:14:12 compute-0 nova_compute[185389]: 2026-01-26 17:14:12.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:14:12 compute-0 nova_compute[185389]: 2026-01-26 17:14:12.759 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:14:12 compute-0 nova_compute[185389]: 2026-01-26 17:14:12.760 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:14:12 compute-0 nova_compute[185389]: 2026-01-26 17:14:12.760 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:14:12 compute-0 nova_compute[185389]: 2026-01-26 17:14:12.760 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 17:14:12 compute-0 nova_compute[185389]: 2026-01-26 17:14:12.857 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:14:12 compute-0 nova_compute[185389]: 2026-01-26 17:14:12.942 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:14:12 compute-0 nova_compute[185389]: 2026-01-26 17:14:12.944 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:14:13 compute-0 nova_compute[185389]: 2026-01-26 17:14:13.005 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:14:13 compute-0 nova_compute[185389]: 2026-01-26 17:14:13.006 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:14:13 compute-0 nova_compute[185389]: 2026-01-26 17:14:13.077 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:14:13 compute-0 nova_compute[185389]: 2026-01-26 17:14:13.078 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:14:13 compute-0 nova_compute[185389]: 2026-01-26 17:14:13.178 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:14:13 compute-0 nova_compute[185389]: 2026-01-26 17:14:13.186 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:14:13 compute-0 nova_compute[185389]: 2026-01-26 17:14:13.256 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:14:13 compute-0 nova_compute[185389]: 2026-01-26 17:14:13.257 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:14:13 compute-0 nova_compute[185389]: 2026-01-26 17:14:13.320 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:14:13 compute-0 nova_compute[185389]: 2026-01-26 17:14:13.322 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:14:13 compute-0 nova_compute[185389]: 2026-01-26 17:14:13.404 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:14:13 compute-0 nova_compute[185389]: 2026-01-26 17:14:13.406 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:14:13 compute-0 nova_compute[185389]: 2026-01-26 17:14:13.477 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550/disk.eph0 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:14:13 compute-0 nova_compute[185389]: 2026-01-26 17:14:13.909 185393 WARNING nova.virt.libvirt.driver [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:14:13 compute-0 nova_compute[185389]: 2026-01-26 17:14:13.910 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4848MB free_disk=72.36937713623047GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 17:14:13 compute-0 nova_compute[185389]: 2026-01-26 17:14:13.910 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:14:13 compute-0 nova_compute[185389]: 2026-01-26 17:14:13.911 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:14:14 compute-0 nova_compute[185389]: 2026-01-26 17:14:14.023 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance 60ba224f-9c5d-4eb4-b501-66d7339832b9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 17:14:14 compute-0 nova_compute[185389]: 2026-01-26 17:14:14.024 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance a2578f61-3f19-40f4-a32f-97cf22569550 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 17:14:14 compute-0 nova_compute[185389]: 2026-01-26 17:14:14.024 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 17:14:14 compute-0 nova_compute[185389]: 2026-01-26 17:14:14.024 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 17:14:14 compute-0 nova_compute[185389]: 2026-01-26 17:14:14.079 185393 DEBUG nova.compute.provider_tree [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:14:14 compute-0 nova_compute[185389]: 2026-01-26 17:14:14.094 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:14:14 compute-0 nova_compute[185389]: 2026-01-26 17:14:14.096 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 17:14:14 compute-0 nova_compute[185389]: 2026-01-26 17:14:14.096 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.186s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:14:14 compute-0 nova_compute[185389]: 2026-01-26 17:14:14.672 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:14:15 compute-0 nova_compute[185389]: 2026-01-26 17:14:15.585 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:14:19 compute-0 nova_compute[185389]: 2026-01-26 17:14:19.676 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:14:20 compute-0 nova_compute[185389]: 2026-01-26 17:14:20.092 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:14:20 compute-0 nova_compute[185389]: 2026-01-26 17:14:20.588 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:14:21 compute-0 nova_compute[185389]: 2026-01-26 17:14:21.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:14:22 compute-0 podman[253430]: 2026-01-26 17:14:22.19112045 +0000 UTC m=+0.068819508 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 26 17:14:22 compute-0 podman[253428]: 2026-01-26 17:14:22.197533015 +0000 UTC m=+0.081537744 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.openshift.expose-services=, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, vcs-type=git, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9)
Jan 26 17:14:22 compute-0 podman[253429]: 2026-01-26 17:14:22.204139054 +0000 UTC m=+0.084431393 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260120, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 26 17:14:24 compute-0 nova_compute[185389]: 2026-01-26 17:14:24.680 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:14:24 compute-0 nova_compute[185389]: 2026-01-26 17:14:24.714 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:14:25 compute-0 podman[253489]: 2026-01-26 17:14:25.202818455 +0000 UTC m=+0.081772191 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 26 17:14:25 compute-0 nova_compute[185389]: 2026-01-26 17:14:25.592 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:14:28 compute-0 podman[253514]: 2026-01-26 17:14:28.226209436 +0000 UTC m=+0.093029265 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 26 17:14:29 compute-0 nova_compute[185389]: 2026-01-26 17:14:29.683 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:14:29 compute-0 podman[201244]: time="2026-01-26T17:14:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 17:14:29 compute-0 podman[201244]: @ - - [26/Jan/2026:17:14:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 17:14:29 compute-0 podman[201244]: @ - - [26/Jan/2026:17:14:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4395 "" "Go-http-client/1.1"
Jan 26 17:14:30 compute-0 podman[253533]: 2026-01-26 17:14:30.199769085 +0000 UTC m=+0.084030112 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202)
Jan 26 17:14:30 compute-0 podman[253532]: 2026-01-26 17:14:30.242272728 +0000 UTC m=+0.124800228 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, 
config_id=ovn_controller, managed_by=edpm_ansible)
Jan 26 17:14:30 compute-0 podman[253534]: 2026-01-26 17:14:30.263606067 +0000 UTC m=+0.130841412 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., release=1214.1726694543, vcs-type=git, vendor=Red Hat, Inc., config_id=kepler, container_name=kepler, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, name=ubi9, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, distribution-scope=public, com.redhat.component=ubi9-container, io.openshift.expose-services=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, architecture=x86_64)
Jan 26 17:14:30 compute-0 nova_compute[185389]: 2026-01-26 17:14:30.593 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.351 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.352 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.352 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.354 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f04ce8af020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.354 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.354 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.354 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.355 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.355 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.355 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.355 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.355 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.355 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.355 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.355 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.355 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.356 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8adb20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.356 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.356 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.356 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.356 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.356 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.356 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.356 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.357 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.357 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.357 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.357 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.357 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.362 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '60ba224f-9c5d-4eb4-b501-66d7339832b9', 'name': 'test_0', 'flavor': {'id': 'c2a8df4d-a1d7-42a3-8279-8c7de8a1a662', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '718285d9-0264-40f4-9fb3-d2faff180284'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'aa8f1f3bbce34237a208c8e92ca9286f', 'user_id': '3c0ab9326d69400aa6a4a91432885d7f', 'hostId': '5b2ad2004cb5a5985538ff82d4dda707a9aa9c0c35c745039e18e89b', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.365 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'a2578f61-3f19-40f4-a32f-97cf22569550', 'name': 'vn-vo2qfhx-3o3bmw4mfsy3-eo23jljqgyv6-vnf-pi7veetjym6y', 'flavor': {'id': 'c2a8df4d-a1d7-42a3-8279-8c7de8a1a662', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '718285d9-0264-40f4-9fb3-d2faff180284'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'aa8f1f3bbce34237a208c8e92ca9286f', 'user_id': '3c0ab9326d69400aa6a4a91432885d7f', 'hostId': '5b2ad2004cb5a5985538ff82d4dda707a9aa9c0c35c745039e18e89b', 'status': 'active', 'metadata': {'metering.server_group': '06b33269-d1c6-4fb9-a44b-be304982a550'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.365 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.365 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.365 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.366 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.367 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-26T17:14:31.366097) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:14:31 compute-0 openstack_network_exporter[204387]: ERROR   17:14:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 17:14:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:14:31 compute-0 openstack_network_exporter[204387]: ERROR   17:14:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 17:14:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.439 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.439 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.440 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.518 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.518 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.519 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.519 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.519 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f04ce8af080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.519 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.520 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.520 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.520 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.520 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.latency volume: 876270399 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.520 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.latency volume: 10769042 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.521 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.521 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.latency volume: 1221465504 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.521 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.latency volume: 9811607 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.521 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.522 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.522 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f04ce8af0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.522 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.522 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.522 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.522 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.523 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.522 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-26T17:14:31.520171) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.523 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.523 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-26T17:14:31.522613) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.523 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.523 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.524 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.524 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.524 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.524 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f04ce8af470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.525 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.525 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.525 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.525 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.525 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-26T17:14:31.525252) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.531 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.534 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.535 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.535 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f04ce8ac260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.535 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.535 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f04cfa81c70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.535 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.535 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.535 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.535 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.536 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-26T17:14:31.535694) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.561 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/cpu volume: 60260000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.586 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/cpu volume: 54690000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.586 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.586 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f04ce8ac170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.587 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.587 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.587 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.587 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.587 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.588 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.587 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-26T17:14:31.587321) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.588 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.588 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f04ce8af140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.588 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.588 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.588 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.588 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.589 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-26T17:14:31.588782) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.589 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.589 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f04ce8adb50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.589 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.590 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.590 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.590 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.590 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.590 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-26T17:14:31.590205) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.591 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.591 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.591 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f04ce8ad9a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.591 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.591 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.591 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.592 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.592 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.bytes volume: 2482 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.592 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-26T17:14:31.592062) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.592 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.outgoing.bytes volume: 2538 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.593 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.593 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f04ce8af1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.593 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.593 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.593 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.593 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.594 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-26T17:14:31.593793) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.594 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.594 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f04ce8ad8e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.595 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.595 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.595 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.595 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.595 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.595 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-26T17:14:31.595301) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.596 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.outgoing.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.596 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.596 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f04ce8ada60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.596 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.596 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.596 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.597 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.597 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.597 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-26T17:14:31.597097) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.597 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.598 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.598 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f04ce8adaf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.598 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.598 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f04ce8ad880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.598 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.599 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.599 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.599 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.599 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-26T17:14:31.599390) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.600 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.600 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.600 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.600 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f04ce8af3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.601 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.601 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.601 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.601 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.601 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-26T17:14:31.601297) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.601 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/memory.usage volume: 48.76171875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.602 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/memory.usage volume: 48.953125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.602 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.602 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f04ce8af6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.602 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.602 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.602 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.603 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.603 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.603 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-26T17:14:31.603085) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.603 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.604 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.604 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f04ce8af410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.604 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.604 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.604 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.604 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.605 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.incoming.bytes volume: 2304 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.605 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-26T17:14:31.604822) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.605 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.incoming.bytes volume: 1696 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.605 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.605 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f04ce8ac470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.605 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.606 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.606 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.606 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.606 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.606 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-26T17:14:31.606176) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.606 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.607 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.607 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f04d17142f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.607 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.607 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.607 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.607 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.608 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-26T17:14:31.607531) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.643 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.644 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.644 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.678 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.679 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.679 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.680 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.680 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f04ce8afe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.680 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.680 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.680 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.680 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.680 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.680 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.681 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.681 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.681 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.681 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.682 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.682 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f04ce8aede0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.682 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.682 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.682 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.682 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.682 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-26T17:14:31.680511) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.683 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.683 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.683 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.683 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.684 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-26T17:14:31.682907) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.684 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.684 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.684 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.684 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f04ce8aef00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.685 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.685 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.685 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.685 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.685 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.latency volume: 331117122 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.685 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.latency volume: 97972603 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.685 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.latency volume: 58692222 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.686 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.latency volume: 437272566 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.686 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.latency volume: 86953754 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.686 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.latency volume: 62824695 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.686 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.687 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f04ce8ac710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.687 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.687 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.687 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-26T17:14:31.685336) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.687 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.687 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.687 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.687 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.688 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.688 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f04ce8aef60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.688 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.688 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-26T17:14:31.687447) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.688 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.688 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.688 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.688 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.688 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.689 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.689 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.689 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.689 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.690 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.690 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f04ce8aefc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.690 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.690 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-26T17:14:31.688610) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.690 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.690 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.690 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.690 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.691 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-26T17:14:31.690730) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.691 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.691 14 DEBUG ceilometer.compute.pollsters [-] 60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.691 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.691 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.692 14 DEBUG ceilometer.compute.pollsters [-] a2578f61-3f19-40f4-a32f-97cf22569550/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.692 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.692 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.693 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.693 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.693 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.693 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.693 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.693 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.693 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.694 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.694 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.694 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.694 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.694 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.694 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.694 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.694 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.694 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.695 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.695 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.695 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.695 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.695 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.695 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.695 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.695 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:14:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:14:31.695 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:14:34 compute-0 nova_compute[185389]: 2026-01-26 17:14:34.685 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:14:35 compute-0 nova_compute[185389]: 2026-01-26 17:14:35.595 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:14:39 compute-0 nova_compute[185389]: 2026-01-26 17:14:39.689 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:14:40 compute-0 nova_compute[185389]: 2026-01-26 17:14:40.598 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:14:44 compute-0 nova_compute[185389]: 2026-01-26 17:14:44.693 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:14:45 compute-0 nova_compute[185389]: 2026-01-26 17:14:45.599 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:14:49 compute-0 nova_compute[185389]: 2026-01-26 17:14:49.696 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:14:50 compute-0 nova_compute[185389]: 2026-01-26 17:14:50.603 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:14:51 compute-0 sshd-session[253593]: Connection closed by authenticating user root 45.148.10.121 port 37972 [preauth]
Jan 26 17:14:53 compute-0 podman[253595]: 2026-01-26 17:14:53.21385082 +0000 UTC m=+0.098407952 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., distribution-scope=public, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, version=9.6, config_id=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Jan 26 17:14:53 compute-0 podman[253597]: 2026-01-26 17:14:53.213885531 +0000 UTC m=+0.089815089 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 26 17:14:53 compute-0 podman[253596]: 2026-01-26 17:14:53.216919384 +0000 UTC m=+0.094518587 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260120, io.buildah.version=1.41.4, managed_by=edpm_ansible, tcib_build_tag=93ecf842527b95c82e14fba92451bd07)
Jan 26 17:14:54 compute-0 nova_compute[185389]: 2026-01-26 17:14:54.700 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:14:55 compute-0 nova_compute[185389]: 2026-01-26 17:14:55.605 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:14:56 compute-0 podman[253657]: 2026-01-26 17:14:56.27342111 +0000 UTC m=+0.148498822 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 26 17:14:59 compute-0 podman[253682]: 2026-01-26 17:14:59.213975513 +0000 UTC m=+0.094037053 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 26 17:14:59 compute-0 nova_compute[185389]: 2026-01-26 17:14:59.703 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:14:59 compute-0 podman[201244]: time="2026-01-26T17:14:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 17:14:59 compute-0 podman[201244]: @ - - [26/Jan/2026:17:14:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 17:14:59 compute-0 podman[201244]: @ - - [26/Jan/2026:17:14:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4384 "" "Go-http-client/1.1"
Jan 26 17:15:00 compute-0 nova_compute[185389]: 2026-01-26 17:15:00.608 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:15:01 compute-0 podman[253701]: 2026-01-26 17:15:01.20986521 +0000 UTC m=+0.082301335 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, container_name=kepler, io.openshift.tags=base rhel9, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, config_id=kepler, managed_by=edpm_ansible, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-container, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, vendor=Red Hat, Inc.)
Jan 26 17:15:01 compute-0 podman[253700]: 2026-01-26 17:15:01.226923953 +0000 UTC m=+0.103222913 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi)
Jan 26 17:15:01 compute-0 podman[253699]: 2026-01-26 17:15:01.263170876 +0000 UTC m=+0.143749132 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 17:15:01 compute-0 openstack_network_exporter[204387]: ERROR   17:15:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 17:15:01 compute-0 openstack_network_exporter[204387]: ERROR   17:15:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 17:15:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:15:01.766 106955 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:15:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:15:01.766 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:15:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:15:01.767 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:15:04 compute-0 nova_compute[185389]: 2026-01-26 17:15:04.705 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:15:04 compute-0 nova_compute[185389]: 2026-01-26 17:15:04.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:15:05 compute-0 nova_compute[185389]: 2026-01-26 17:15:05.611 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:15:05 compute-0 nova_compute[185389]: 2026-01-26 17:15:05.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:15:05 compute-0 nova_compute[185389]: 2026-01-26 17:15:05.719 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 17:15:06 compute-0 nova_compute[185389]: 2026-01-26 17:15:06.721 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:15:07 compute-0 nova_compute[185389]: 2026-01-26 17:15:07.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:15:07 compute-0 nova_compute[185389]: 2026-01-26 17:15:07.720 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 17:15:07 compute-0 nova_compute[185389]: 2026-01-26 17:15:07.720 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 17:15:07 compute-0 nova_compute[185389]: 2026-01-26 17:15:07.752 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Jan 26 17:15:07 compute-0 nova_compute[185389]: 2026-01-26 17:15:07.775 185393 DEBUG oslo_concurrency.lockutils [None req-0ffbd20e-5364-4256-b458-6f8b34bf1a79 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Acquiring lock "a2578f61-3f19-40f4-a32f-97cf22569550" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:15:07 compute-0 nova_compute[185389]: 2026-01-26 17:15:07.776 185393 DEBUG oslo_concurrency.lockutils [None req-0ffbd20e-5364-4256-b458-6f8b34bf1a79 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "a2578f61-3f19-40f4-a32f-97cf22569550" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:15:07 compute-0 nova_compute[185389]: 2026-01-26 17:15:07.776 185393 DEBUG oslo_concurrency.lockutils [None req-0ffbd20e-5364-4256-b458-6f8b34bf1a79 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Acquiring lock "a2578f61-3f19-40f4-a32f-97cf22569550-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:15:07 compute-0 nova_compute[185389]: 2026-01-26 17:15:07.777 185393 DEBUG oslo_concurrency.lockutils [None req-0ffbd20e-5364-4256-b458-6f8b34bf1a79 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "a2578f61-3f19-40f4-a32f-97cf22569550-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:15:07 compute-0 nova_compute[185389]: 2026-01-26 17:15:07.777 185393 DEBUG oslo_concurrency.lockutils [None req-0ffbd20e-5364-4256-b458-6f8b34bf1a79 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "a2578f61-3f19-40f4-a32f-97cf22569550-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:15:07 compute-0 nova_compute[185389]: 2026-01-26 17:15:07.779 185393 INFO nova.compute.manager [None req-0ffbd20e-5364-4256-b458-6f8b34bf1a79 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Terminating instance
Jan 26 17:15:07 compute-0 nova_compute[185389]: 2026-01-26 17:15:07.781 185393 DEBUG nova.compute.manager [None req-0ffbd20e-5364-4256-b458-6f8b34bf1a79 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 17:15:07 compute-0 kernel: tap58a644b5-e3 (unregistering): left promiscuous mode
Jan 26 17:15:07 compute-0 NetworkManager[56253]: <info>  [1769447707.8303] device (tap58a644b5-e3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 17:15:07 compute-0 ovn_controller[97699]: 2026-01-26T17:15:07Z|00057|binding|INFO|Releasing lport 58a644b5-e3a2-4838-9216-8540447cf0a5 from this chassis (sb_readonly=0)
Jan 26 17:15:07 compute-0 nova_compute[185389]: 2026-01-26 17:15:07.843 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:15:07 compute-0 ovn_controller[97699]: 2026-01-26T17:15:07Z|00058|binding|INFO|Setting lport 58a644b5-e3a2-4838-9216-8540447cf0a5 down in Southbound
Jan 26 17:15:07 compute-0 ovn_controller[97699]: 2026-01-26T17:15:07Z|00059|binding|INFO|Removing iface tap58a644b5-e3 ovn-installed in OVS
Jan 26 17:15:07 compute-0 nova_compute[185389]: 2026-01-26 17:15:07.849 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:15:07 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:15:07.860 106955 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ac:8d:a5 192.168.0.107'], port_security=['fa:16:3e:ac:8d:a5 192.168.0.107'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-2qbervo2qfhx-3o3bmw4mfsy3-eo23jljqgyv6-port-2y3pojzsevxv', 'neutron:cidrs': '192.168.0.107/24', 'neutron:device_id': 'a2578f61-3f19-40f4-a32f-97cf22569550', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74318d1e-b1d8-47d5-8ac3-218d758610fe', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-2qbervo2qfhx-3o3bmw4mfsy3-eo23jljqgyv6-port-2y3pojzsevxv', 'neutron:project_id': 'aa8f1f3bbce34237a208c8e92ca9286f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c6ae7745-53c4-4846-bf8b-0c9f0303bef3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.229', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1197b65b-eda5-4824-97ab-519748b0b6a7, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7faee18cfdf0>], logical_port=58a644b5-e3a2-4838-9216-8540447cf0a5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7faee18cfdf0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 17:15:07 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:15:07.862 106955 INFO neutron.agent.ovn.metadata.agent [-] Port 58a644b5-e3a2-4838-9216-8540447cf0a5 in datapath 74318d1e-b1d8-47d5-8ac3-218d758610fe unbound from our chassis
Jan 26 17:15:07 compute-0 nova_compute[185389]: 2026-01-26 17:15:07.865 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:15:07 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:15:07.865 106955 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 74318d1e-b1d8-47d5-8ac3-218d758610fe
Jan 26 17:15:07 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:15:07.887 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[e3fb40bc-8acb-4933-bedd-e6c4119e49ab]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:15:07 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Deactivated successfully.
Jan 26 17:15:07 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Consumed 4min 36.856s CPU time.
Jan 26 17:15:07 compute-0 systemd-machined[156679]: Machine qemu-4-instance-00000004 terminated.
Jan 26 17:15:07 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:15:07.923 238787 DEBUG oslo.privsep.daemon [-] privsep: reply[6b280eec-5dc1-48d9-9cb9-3cff88cce705]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:15:07 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:15:07.928 238787 DEBUG oslo.privsep.daemon [-] privsep: reply[3828da7f-d539-447f-8f20-ece79af64cb5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:15:07 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:15:07.958 238787 DEBUG oslo.privsep.daemon [-] privsep: reply[7a05cfc4-b95a-4bf7-bafd-80e6f5349273]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:15:07 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:15:07.978 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[a574f2f7-f6d4-47e5-b3b8-bf1c15e8e793]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap74318d1e-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b2:6c:31'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 15, 'rx_bytes': 658, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 15, 'rx_bytes': 658, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 410415, 'reachable_time': 17103, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253775, 'error': None, 'target': 'ovnmeta-74318d1e-b1d8-47d5-8ac3-218d758610fe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:15:08 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:15:07.999 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[04c6f1e4-5484-4308-961d-c3fa51bc53ae]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap74318d1e-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 410434, 'tstamp': 410434}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 253776, 'error': None, 'target': 'ovnmeta-74318d1e-b1d8-47d5-8ac3-218d758610fe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap74318d1e-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 410439, 'tstamp': 410439}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 253776, 'error': None, 'target': 'ovnmeta-74318d1e-b1d8-47d5-8ac3-218d758610fe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:15:08 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:15:08.001 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap74318d1e-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:15:08 compute-0 nova_compute[185389]: 2026-01-26 17:15:08.002 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:15:08 compute-0 nova_compute[185389]: 2026-01-26 17:15:08.009 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:15:08 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:15:08.010 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap74318d1e-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:15:08 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:15:08.010 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 17:15:08 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:15:08.010 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap74318d1e-b0, col_values=(('external_ids', {'iface-id': '6045fbea-609e-4588-93b4-ca6dda4224d1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:15:08 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:15:08.011 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 17:15:08 compute-0 nova_compute[185389]: 2026-01-26 17:15:08.096 185393 INFO nova.virt.libvirt.driver [-] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Instance destroyed successfully.
Jan 26 17:15:08 compute-0 nova_compute[185389]: 2026-01-26 17:15:08.097 185393 DEBUG nova.objects.instance [None req-0ffbd20e-5364-4256-b458-6f8b34bf1a79 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lazy-loading 'resources' on Instance uuid a2578f61-3f19-40f4-a32f-97cf22569550 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 17:15:08 compute-0 nova_compute[185389]: 2026-01-26 17:15:08.513 185393 DEBUG nova.virt.libvirt.vif [None req-0ffbd20e-5364-4256-b458-6f8b34bf1a79 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:43:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-vo2qfhx-3o3bmw4mfsy3-eo23jljqgyv6-vnf-pi7veetjym6y',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-vo2qfhx-3o3bmw4mfsy3-eo23jljqgyv6-vnf-pi7veetjym6y',id=4,image_ref='718285d9-0264-40f4-9fb3-d2faff180284',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:44:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='06b33269-d1c6-4fb9-a44b-be304982a550'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='aa8f1f3bbce34237a208c8e92ca9286f',ramdisk_id='',reservation_id='r-yv3lpxwt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,member,reader',image_base_image_ref='718285d9-0264-40f4-9fb3-d2faff180284',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image
_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:44:04Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0wMDY4MDM5Njk4MTE1NTgwMDM5PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTAwNjgwMzk2OTgxMTU1ODAwMzk9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MDA2ODAzOTY5ODExNTU4MDAzOT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTAwNjgwMzk2OTgxMTU1ODAwMzk9PQpDb250ZW50LVR5cGU6IHRleHQvcGFyd
C1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgI
CAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0wMDY4MDM5Njk4MTE1NTgwMDM5PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0wMDY4MDM5Njk4MTE1NTgwMDM5PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5ja
G1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvK
Jan 26 17:15:08 compute-0 nova_compute[185389]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MDA2O
DAzOTY5ODExNTU4MDAzOT09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTAwNjgwMzk2OTgxMTU1ODAwMzk9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0wMDY4MDM5Njk4MTE1NTgwMDM5PT0tLQo=',user_id='3c0ab9326d69400aa6a4a91432885d7f',uuid=a2578f61-3f19-40f4-a32f-97cf22569550,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "58a644b5-e3a2-4838-9216-8540447cf0a5", "address": "fa:16:3e:ac:8d:a5", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.107", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58a644b5-e3", "ovs_interfaceid": "58a644b5-e3a2-4838-9216-8540447cf0a5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, 
"preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 17:15:08 compute-0 nova_compute[185389]: 2026-01-26 17:15:08.514 185393 DEBUG nova.network.os_vif_util [None req-0ffbd20e-5364-4256-b458-6f8b34bf1a79 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Converting VIF {"id": "58a644b5-e3a2-4838-9216-8540447cf0a5", "address": "fa:16:3e:ac:8d:a5", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.107", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58a644b5-e3", "ovs_interfaceid": "58a644b5-e3a2-4838-9216-8540447cf0a5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 17:15:08 compute-0 nova_compute[185389]: 2026-01-26 17:15:08.514 185393 DEBUG nova.network.os_vif_util [None req-0ffbd20e-5364-4256-b458-6f8b34bf1a79 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ac:8d:a5,bridge_name='br-int',has_traffic_filtering=True,id=58a644b5-e3a2-4838-9216-8540447cf0a5,network=Network(74318d1e-b1d8-47d5-8ac3-218d758610fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap58a644b5-e3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 17:15:08 compute-0 nova_compute[185389]: 2026-01-26 17:15:08.515 185393 DEBUG os_vif [None req-0ffbd20e-5364-4256-b458-6f8b34bf1a79 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ac:8d:a5,bridge_name='br-int',has_traffic_filtering=True,id=58a644b5-e3a2-4838-9216-8540447cf0a5,network=Network(74318d1e-b1d8-47d5-8ac3-218d758610fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap58a644b5-e3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 17:15:08 compute-0 nova_compute[185389]: 2026-01-26 17:15:08.517 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:15:08 compute-0 nova_compute[185389]: 2026-01-26 17:15:08.517 185393 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58a644b5-e3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:15:08 compute-0 nova_compute[185389]: 2026-01-26 17:15:08.520 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:15:08 compute-0 nova_compute[185389]: 2026-01-26 17:15:08.522 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 17:15:08 compute-0 nova_compute[185389]: 2026-01-26 17:15:08.523 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:15:08 compute-0 nova_compute[185389]: 2026-01-26 17:15:08.528 185393 INFO os_vif [None req-0ffbd20e-5364-4256-b458-6f8b34bf1a79 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ac:8d:a5,bridge_name='br-int',has_traffic_filtering=True,id=58a644b5-e3a2-4838-9216-8540447cf0a5,network=Network(74318d1e-b1d8-47d5-8ac3-218d758610fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap58a644b5-e3')
Jan 26 17:15:08 compute-0 nova_compute[185389]: 2026-01-26 17:15:08.528 185393 INFO nova.virt.libvirt.driver [None req-0ffbd20e-5364-4256-b458-6f8b34bf1a79 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Deleting instance files /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550_del
Jan 26 17:15:08 compute-0 nova_compute[185389]: 2026-01-26 17:15:08.529 185393 INFO nova.virt.libvirt.driver [None req-0ffbd20e-5364-4256-b458-6f8b34bf1a79 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Deletion of /var/lib/nova/instances/a2578f61-3f19-40f4-a32f-97cf22569550_del complete
Jan 26 17:15:08 compute-0 nova_compute[185389]: 2026-01-26 17:15:08.633 185393 INFO nova.compute.manager [None req-0ffbd20e-5364-4256-b458-6f8b34bf1a79 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Took 0.85 seconds to destroy the instance on the hypervisor.
Jan 26 17:15:08 compute-0 nova_compute[185389]: 2026-01-26 17:15:08.634 185393 DEBUG oslo.service.loopingcall [None req-0ffbd20e-5364-4256-b458-6f8b34bf1a79 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 17:15:08 compute-0 nova_compute[185389]: 2026-01-26 17:15:08.636 185393 DEBUG nova.compute.manager [-] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 17:15:08 compute-0 nova_compute[185389]: 2026-01-26 17:15:08.636 185393 DEBUG nova.network.neutron [-] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 17:15:08 compute-0 rsyslogd[235842]: message too long (8192) with configured size 8096, begin of message is: 2026-01-26 17:15:08.513 185393 DEBUG nova.virt.libvirt.vif [None req-0ffbd20e-53 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 26 17:15:08 compute-0 nova_compute[185389]: 2026-01-26 17:15:08.969 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "refresh_cache-60ba224f-9c5d-4eb4-b501-66d7339832b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 17:15:08 compute-0 nova_compute[185389]: 2026-01-26 17:15:08.971 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquired lock "refresh_cache-60ba224f-9c5d-4eb4-b501-66d7339832b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 17:15:08 compute-0 nova_compute[185389]: 2026-01-26 17:15:08.971 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 26 17:15:08 compute-0 nova_compute[185389]: 2026-01-26 17:15:08.971 185393 DEBUG nova.objects.instance [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 60ba224f-9c5d-4eb4-b501-66d7339832b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 17:15:09 compute-0 nova_compute[185389]: 2026-01-26 17:15:09.137 185393 DEBUG nova.compute.manager [req-7c42d2a1-e3d3-4399-aeb0-e99dca0fa065 req-3be600a2-178d-44ab-a56c-5961b92ce8b6 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Received event network-vif-unplugged-58a644b5-e3a2-4838-9216-8540447cf0a5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 17:15:09 compute-0 nova_compute[185389]: 2026-01-26 17:15:09.138 185393 DEBUG oslo_concurrency.lockutils [req-7c42d2a1-e3d3-4399-aeb0-e99dca0fa065 req-3be600a2-178d-44ab-a56c-5961b92ce8b6 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquiring lock "a2578f61-3f19-40f4-a32f-97cf22569550-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:15:09 compute-0 nova_compute[185389]: 2026-01-26 17:15:09.139 185393 DEBUG oslo_concurrency.lockutils [req-7c42d2a1-e3d3-4399-aeb0-e99dca0fa065 req-3be600a2-178d-44ab-a56c-5961b92ce8b6 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "a2578f61-3f19-40f4-a32f-97cf22569550-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:15:09 compute-0 nova_compute[185389]: 2026-01-26 17:15:09.140 185393 DEBUG oslo_concurrency.lockutils [req-7c42d2a1-e3d3-4399-aeb0-e99dca0fa065 req-3be600a2-178d-44ab-a56c-5961b92ce8b6 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "a2578f61-3f19-40f4-a32f-97cf22569550-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:15:09 compute-0 nova_compute[185389]: 2026-01-26 17:15:09.141 185393 DEBUG nova.compute.manager [req-7c42d2a1-e3d3-4399-aeb0-e99dca0fa065 req-3be600a2-178d-44ab-a56c-5961b92ce8b6 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] No waiting events found dispatching network-vif-unplugged-58a644b5-e3a2-4838-9216-8540447cf0a5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 17:15:09 compute-0 nova_compute[185389]: 2026-01-26 17:15:09.141 185393 DEBUG nova.compute.manager [req-7c42d2a1-e3d3-4399-aeb0-e99dca0fa065 req-3be600a2-178d-44ab-a56c-5961b92ce8b6 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Received event network-vif-unplugged-58a644b5-e3a2-4838-9216-8540447cf0a5 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 17:15:09 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:15:09.444 106955 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '0e:35:12', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'de:09:3c:73:7d:ab'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 17:15:09 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:15:09.445 106955 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 26 17:15:09 compute-0 nova_compute[185389]: 2026-01-26 17:15:09.446 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:15:09 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:15:09.447 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1c72c11d-5050-47c3-89e8-912766588fb3, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:15:10 compute-0 nova_compute[185389]: 2026-01-26 17:15:10.289 185393 DEBUG nova.network.neutron [-] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 17:15:10 compute-0 nova_compute[185389]: 2026-01-26 17:15:10.318 185393 INFO nova.compute.manager [-] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Took 1.68 seconds to deallocate network for instance.
Jan 26 17:15:10 compute-0 nova_compute[185389]: 2026-01-26 17:15:10.329 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Updating instance_info_cache with network_info: [{"id": "0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f", "address": "fa:16:3e:b0:51:31", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.57", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f88f3ae-fb", "ovs_interfaceid": "0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 17:15:10 compute-0 nova_compute[185389]: 2026-01-26 17:15:10.359 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Releasing lock "refresh_cache-60ba224f-9c5d-4eb4-b501-66d7339832b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 17:15:10 compute-0 nova_compute[185389]: 2026-01-26 17:15:10.360 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 26 17:15:10 compute-0 nova_compute[185389]: 2026-01-26 17:15:10.360 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:15:10 compute-0 nova_compute[185389]: 2026-01-26 17:15:10.360 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:15:10 compute-0 nova_compute[185389]: 2026-01-26 17:15:10.371 185393 DEBUG oslo_concurrency.lockutils [None req-0ffbd20e-5364-4256-b458-6f8b34bf1a79 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:15:10 compute-0 nova_compute[185389]: 2026-01-26 17:15:10.372 185393 DEBUG oslo_concurrency.lockutils [None req-0ffbd20e-5364-4256-b458-6f8b34bf1a79 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:15:10 compute-0 nova_compute[185389]: 2026-01-26 17:15:10.477 185393 DEBUG nova.compute.provider_tree [None req-0ffbd20e-5364-4256-b458-6f8b34bf1a79 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:15:10 compute-0 nova_compute[185389]: 2026-01-26 17:15:10.613 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:15:10 compute-0 nova_compute[185389]: 2026-01-26 17:15:10.711 185393 DEBUG nova.scheduler.client.report [None req-0ffbd20e-5364-4256-b458-6f8b34bf1a79 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:15:10 compute-0 nova_compute[185389]: 2026-01-26 17:15:10.737 185393 DEBUG oslo_concurrency.lockutils [None req-0ffbd20e-5364-4256-b458-6f8b34bf1a79 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.365s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:15:10 compute-0 nova_compute[185389]: 2026-01-26 17:15:10.773 185393 INFO nova.scheduler.client.report [None req-0ffbd20e-5364-4256-b458-6f8b34bf1a79 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Deleted allocations for instance a2578f61-3f19-40f4-a32f-97cf22569550
Jan 26 17:15:10 compute-0 nova_compute[185389]: 2026-01-26 17:15:10.859 185393 DEBUG oslo_concurrency.lockutils [None req-0ffbd20e-5364-4256-b458-6f8b34bf1a79 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "a2578f61-3f19-40f4-a32f-97cf22569550" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.083s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:15:11 compute-0 nova_compute[185389]: 2026-01-26 17:15:11.266 185393 DEBUG nova.compute.manager [req-b98c6420-9937-4b92-bcf5-6e69a61e0a20 req-4a5da418-58d6-4616-b89a-45d1a0a1cdbf 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Received event network-vif-plugged-58a644b5-e3a2-4838-9216-8540447cf0a5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 17:15:11 compute-0 nova_compute[185389]: 2026-01-26 17:15:11.266 185393 DEBUG oslo_concurrency.lockutils [req-b98c6420-9937-4b92-bcf5-6e69a61e0a20 req-4a5da418-58d6-4616-b89a-45d1a0a1cdbf 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquiring lock "a2578f61-3f19-40f4-a32f-97cf22569550-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:15:11 compute-0 nova_compute[185389]: 2026-01-26 17:15:11.267 185393 DEBUG oslo_concurrency.lockutils [req-b98c6420-9937-4b92-bcf5-6e69a61e0a20 req-4a5da418-58d6-4616-b89a-45d1a0a1cdbf 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "a2578f61-3f19-40f4-a32f-97cf22569550-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:15:11 compute-0 nova_compute[185389]: 2026-01-26 17:15:11.267 185393 DEBUG oslo_concurrency.lockutils [req-b98c6420-9937-4b92-bcf5-6e69a61e0a20 req-4a5da418-58d6-4616-b89a-45d1a0a1cdbf 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "a2578f61-3f19-40f4-a32f-97cf22569550-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:15:11 compute-0 nova_compute[185389]: 2026-01-26 17:15:11.267 185393 DEBUG nova.compute.manager [req-b98c6420-9937-4b92-bcf5-6e69a61e0a20 req-4a5da418-58d6-4616-b89a-45d1a0a1cdbf 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] No waiting events found dispatching network-vif-plugged-58a644b5-e3a2-4838-9216-8540447cf0a5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 17:15:11 compute-0 nova_compute[185389]: 2026-01-26 17:15:11.268 185393 WARNING nova.compute.manager [req-b98c6420-9937-4b92-bcf5-6e69a61e0a20 req-4a5da418-58d6-4616-b89a-45d1a0a1cdbf 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Received unexpected event network-vif-plugged-58a644b5-e3a2-4838-9216-8540447cf0a5 for instance with vm_state deleted and task_state None.
Jan 26 17:15:11 compute-0 nova_compute[185389]: 2026-01-26 17:15:11.268 185393 DEBUG nova.compute.manager [req-b98c6420-9937-4b92-bcf5-6e69a61e0a20 req-4a5da418-58d6-4616-b89a-45d1a0a1cdbf 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Received event network-changed-58a644b5-e3a2-4838-9216-8540447cf0a5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 17:15:11 compute-0 nova_compute[185389]: 2026-01-26 17:15:11.269 185393 DEBUG nova.compute.manager [req-b98c6420-9937-4b92-bcf5-6e69a61e0a20 req-4a5da418-58d6-4616-b89a-45d1a0a1cdbf 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Refreshing instance network info cache due to event network-changed-58a644b5-e3a2-4838-9216-8540447cf0a5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 17:15:11 compute-0 nova_compute[185389]: 2026-01-26 17:15:11.269 185393 DEBUG oslo_concurrency.lockutils [req-b98c6420-9937-4b92-bcf5-6e69a61e0a20 req-4a5da418-58d6-4616-b89a-45d1a0a1cdbf 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquiring lock "refresh_cache-a2578f61-3f19-40f4-a32f-97cf22569550" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 17:15:11 compute-0 nova_compute[185389]: 2026-01-26 17:15:11.269 185393 DEBUG oslo_concurrency.lockutils [req-b98c6420-9937-4b92-bcf5-6e69a61e0a20 req-4a5da418-58d6-4616-b89a-45d1a0a1cdbf 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquired lock "refresh_cache-a2578f61-3f19-40f4-a32f-97cf22569550" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 17:15:11 compute-0 nova_compute[185389]: 2026-01-26 17:15:11.270 185393 DEBUG nova.network.neutron [req-b98c6420-9937-4b92-bcf5-6e69a61e0a20 req-4a5da418-58d6-4616-b89a-45d1a0a1cdbf 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Refreshing network info cache for port 58a644b5-e3a2-4838-9216-8540447cf0a5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 17:15:11 compute-0 nova_compute[185389]: 2026-01-26 17:15:11.490 185393 DEBUG nova.network.neutron [req-b98c6420-9937-4b92-bcf5-6e69a61e0a20 req-4a5da418-58d6-4616-b89a-45d1a0a1cdbf 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 17:15:11 compute-0 nova_compute[185389]: 2026-01-26 17:15:11.904 185393 DEBUG nova.network.neutron [req-b98c6420-9937-4b92-bcf5-6e69a61e0a20 req-4a5da418-58d6-4616-b89a-45d1a0a1cdbf 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106
Jan 26 17:15:11 compute-0 nova_compute[185389]: 2026-01-26 17:15:11.905 185393 DEBUG oslo_concurrency.lockutils [req-b98c6420-9937-4b92-bcf5-6e69a61e0a20 req-4a5da418-58d6-4616-b89a-45d1a0a1cdbf 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Releasing lock "refresh_cache-a2578f61-3f19-40f4-a32f-97cf22569550" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 17:15:13 compute-0 nova_compute[185389]: 2026-01-26 17:15:13.521 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:15:13 compute-0 nova_compute[185389]: 2026-01-26 17:15:13.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:15:13 compute-0 nova_compute[185389]: 2026-01-26 17:15:13.743 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:15:13 compute-0 nova_compute[185389]: 2026-01-26 17:15:13.744 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:15:13 compute-0 nova_compute[185389]: 2026-01-26 17:15:13.744 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:15:13 compute-0 nova_compute[185389]: 2026-01-26 17:15:13.744 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 17:15:13 compute-0 nova_compute[185389]: 2026-01-26 17:15:13.859 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:15:13 compute-0 nova_compute[185389]: 2026-01-26 17:15:13.926 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:15:13 compute-0 nova_compute[185389]: 2026-01-26 17:15:13.928 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:15:13 compute-0 nova_compute[185389]: 2026-01-26 17:15:13.994 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:15:13 compute-0 nova_compute[185389]: 2026-01-26 17:15:13.996 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:15:14 compute-0 nova_compute[185389]: 2026-01-26 17:15:14.065 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:15:14 compute-0 nova_compute[185389]: 2026-01-26 17:15:14.066 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:15:14 compute-0 nova_compute[185389]: 2026-01-26 17:15:14.145 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9/disk.eph0 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:15:14 compute-0 nova_compute[185389]: 2026-01-26 17:15:14.519 185393 WARNING nova.virt.libvirt.driver [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:15:14 compute-0 nova_compute[185389]: 2026-01-26 17:15:14.521 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5093MB free_disk=72.39187240600586GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 17:15:14 compute-0 nova_compute[185389]: 2026-01-26 17:15:14.522 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:15:14 compute-0 nova_compute[185389]: 2026-01-26 17:15:14.522 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:15:14 compute-0 nova_compute[185389]: 2026-01-26 17:15:14.621 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance 60ba224f-9c5d-4eb4-b501-66d7339832b9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 17:15:14 compute-0 nova_compute[185389]: 2026-01-26 17:15:14.622 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 17:15:14 compute-0 nova_compute[185389]: 2026-01-26 17:15:14.631 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 17:15:14 compute-0 nova_compute[185389]: 2026-01-26 17:15:14.706 185393 DEBUG nova.compute.provider_tree [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:15:14 compute-0 nova_compute[185389]: 2026-01-26 17:15:14.731 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:15:14 compute-0 nova_compute[185389]: 2026-01-26 17:15:14.777 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 17:15:14 compute-0 nova_compute[185389]: 2026-01-26 17:15:14.777 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.254s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:15:15 compute-0 nova_compute[185389]: 2026-01-26 17:15:15.615 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:15:18 compute-0 nova_compute[185389]: 2026-01-26 17:15:18.523 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:15:20 compute-0 nova_compute[185389]: 2026-01-26 17:15:20.617 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:15:20 compute-0 nova_compute[185389]: 2026-01-26 17:15:20.772 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:15:22 compute-0 nova_compute[185389]: 2026-01-26 17:15:22.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:15:23 compute-0 nova_compute[185389]: 2026-01-26 17:15:23.094 185393 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769447708.0921702, a2578f61-3f19-40f4-a32f-97cf22569550 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 17:15:23 compute-0 nova_compute[185389]: 2026-01-26 17:15:23.095 185393 INFO nova.compute.manager [-] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] VM Stopped (Lifecycle Event)
Jan 26 17:15:23 compute-0 nova_compute[185389]: 2026-01-26 17:15:23.125 185393 DEBUG nova.compute.manager [None req-67ed4716-ad1e-4837-a7a3-f9e0baa57df0 - - - - - -] [instance: a2578f61-3f19-40f4-a32f-97cf22569550] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 17:15:23 compute-0 nova_compute[185389]: 2026-01-26 17:15:23.525 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:15:24 compute-0 podman[253815]: 2026-01-26 17:15:24.215550644 +0000 UTC m=+0.083506108 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 17:15:24 compute-0 podman[253814]: 2026-01-26 17:15:24.250373909 +0000 UTC m=+0.126367711 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260120, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Jan 26 17:15:24 compute-0 podman[253813]: 2026-01-26 17:15:24.254595453 +0000 UTC m=+0.131011236 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, vcs-type=git, distribution-scope=public, managed_by=edpm_ansible, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., name=ubi9-minimal, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, version=9.6, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container)
Jan 26 17:15:25 compute-0 nova_compute[185389]: 2026-01-26 17:15:25.620 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:15:27 compute-0 podman[253870]: 2026-01-26 17:15:27.194012457 +0000 UTC m=+0.082966782 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 26 17:15:28 compute-0 nova_compute[185389]: 2026-01-26 17:15:28.530 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:15:29 compute-0 nova_compute[185389]: 2026-01-26 17:15:29.718 185393 DEBUG oslo_concurrency.lockutils [None req-f8756d8a-4e33-44ed-9671-69a91c9425a7 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Acquiring lock "60ba224f-9c5d-4eb4-b501-66d7339832b9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:15:29 compute-0 nova_compute[185389]: 2026-01-26 17:15:29.718 185393 DEBUG oslo_concurrency.lockutils [None req-f8756d8a-4e33-44ed-9671-69a91c9425a7 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "60ba224f-9c5d-4eb4-b501-66d7339832b9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:15:29 compute-0 nova_compute[185389]: 2026-01-26 17:15:29.719 185393 DEBUG oslo_concurrency.lockutils [None req-f8756d8a-4e33-44ed-9671-69a91c9425a7 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Acquiring lock "60ba224f-9c5d-4eb4-b501-66d7339832b9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:15:29 compute-0 nova_compute[185389]: 2026-01-26 17:15:29.719 185393 DEBUG oslo_concurrency.lockutils [None req-f8756d8a-4e33-44ed-9671-69a91c9425a7 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "60ba224f-9c5d-4eb4-b501-66d7339832b9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:15:29 compute-0 nova_compute[185389]: 2026-01-26 17:15:29.719 185393 DEBUG oslo_concurrency.lockutils [None req-f8756d8a-4e33-44ed-9671-69a91c9425a7 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "60ba224f-9c5d-4eb4-b501-66d7339832b9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:15:29 compute-0 nova_compute[185389]: 2026-01-26 17:15:29.720 185393 INFO nova.compute.manager [None req-f8756d8a-4e33-44ed-9671-69a91c9425a7 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Terminating instance
Jan 26 17:15:29 compute-0 nova_compute[185389]: 2026-01-26 17:15:29.721 185393 DEBUG nova.compute.manager [None req-f8756d8a-4e33-44ed-9671-69a91c9425a7 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 17:15:29 compute-0 podman[201244]: time="2026-01-26T17:15:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 17:15:29 compute-0 kernel: tap0f88f3ae-fb (unregistering): left promiscuous mode
Jan 26 17:15:29 compute-0 podman[201244]: @ - - [26/Jan/2026:17:15:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 17:15:29 compute-0 NetworkManager[56253]: <info>  [1769447729.7723] device (tap0f88f3ae-fb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 17:15:29 compute-0 podman[201244]: @ - - [26/Jan/2026:17:15:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4388 "" "Go-http-client/1.1"
Jan 26 17:15:29 compute-0 ovn_controller[97699]: 2026-01-26T17:15:29Z|00060|binding|INFO|Releasing lport 0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f from this chassis (sb_readonly=0)
Jan 26 17:15:29 compute-0 ovn_controller[97699]: 2026-01-26T17:15:29Z|00061|binding|INFO|Setting lport 0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f down in Southbound
Jan 26 17:15:29 compute-0 nova_compute[185389]: 2026-01-26 17:15:29.779 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:15:29 compute-0 ovn_controller[97699]: 2026-01-26T17:15:29Z|00062|binding|INFO|Removing iface tap0f88f3ae-fb ovn-installed in OVS
Jan 26 17:15:29 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:15:29.787 106955 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:51:31 192.168.0.57'], port_security=['fa:16:3e:b0:51:31 192.168.0.57'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.57/24', 'neutron:device_id': '60ba224f-9c5d-4eb4-b501-66d7339832b9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74318d1e-b1d8-47d5-8ac3-218d758610fe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aa8f1f3bbce34237a208c8e92ca9286f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c6ae7745-53c4-4846-bf8b-0c9f0303bef3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.234'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1197b65b-eda5-4824-97ab-519748b0b6a7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7faee18cfdf0>], logical_port=0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7faee18cfdf0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 17:15:29 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:15:29.789 106955 INFO neutron.agent.ovn.metadata.agent [-] Port 0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f in datapath 74318d1e-b1d8-47d5-8ac3-218d758610fe unbound from our chassis
Jan 26 17:15:29 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:15:29.790 106955 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 74318d1e-b1d8-47d5-8ac3-218d758610fe, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 17:15:29 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:15:29.791 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[768a1dbe-409d-4089-87b7-10d3186ef7af]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:15:29 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:15:29.793 106955 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-74318d1e-b1d8-47d5-8ac3-218d758610fe namespace which is not needed anymore
Jan 26 17:15:29 compute-0 nova_compute[185389]: 2026-01-26 17:15:29.799 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:15:29 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Jan 26 17:15:29 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 5min 35.841s CPU time.
Jan 26 17:15:29 compute-0 systemd-machined[156679]: Machine qemu-1-instance-00000001 terminated.
Jan 26 17:15:29 compute-0 podman[253896]: 2026-01-26 17:15:29.864624884 +0000 UTC m=+0.068517091 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 26 17:15:29 compute-0 nova_compute[185389]: 2026-01-26 17:15:29.945 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:15:29 compute-0 nova_compute[185389]: 2026-01-26 17:15:29.951 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:15:29 compute-0 neutron-haproxy-ovnmeta-74318d1e-b1d8-47d5-8ac3-218d758610fe[238861]: [NOTICE]   (238869) : haproxy version is 2.8.14-c23fe91
Jan 26 17:15:29 compute-0 neutron-haproxy-ovnmeta-74318d1e-b1d8-47d5-8ac3-218d758610fe[238861]: [NOTICE]   (238869) : path to executable is /usr/sbin/haproxy
Jan 26 17:15:29 compute-0 neutron-haproxy-ovnmeta-74318d1e-b1d8-47d5-8ac3-218d758610fe[238861]: [WARNING]  (238869) : Exiting Master process...
Jan 26 17:15:29 compute-0 neutron-haproxy-ovnmeta-74318d1e-b1d8-47d5-8ac3-218d758610fe[238861]: [ALERT]    (238869) : Current worker (238871) exited with code 143 (Terminated)
Jan 26 17:15:29 compute-0 neutron-haproxy-ovnmeta-74318d1e-b1d8-47d5-8ac3-218d758610fe[238861]: [WARNING]  (238869) : All workers exited. Exiting... (0)
Jan 26 17:15:29 compute-0 systemd[1]: libpod-808f0f01465cd36db48d7b3fc8eaba9d4bc961157ab02085d0b30420ae3887c2.scope: Deactivated successfully.
Jan 26 17:15:29 compute-0 podman[253939]: 2026-01-26 17:15:29.985614908 +0000 UTC m=+0.068610364 container died 808f0f01465cd36db48d7b3fc8eaba9d4bc961157ab02085d0b30420ae3887c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-74318d1e-b1d8-47d5-8ac3-218d758610fe, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 26 17:15:30 compute-0 nova_compute[185389]: 2026-01-26 17:15:30.022 185393 INFO nova.virt.libvirt.driver [-] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Instance destroyed successfully.
Jan 26 17:15:30 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-808f0f01465cd36db48d7b3fc8eaba9d4bc961157ab02085d0b30420ae3887c2-userdata-shm.mount: Deactivated successfully.
Jan 26 17:15:30 compute-0 nova_compute[185389]: 2026-01-26 17:15:30.024 185393 DEBUG nova.objects.instance [None req-f8756d8a-4e33-44ed-9671-69a91c9425a7 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lazy-loading 'resources' on Instance uuid 60ba224f-9c5d-4eb4-b501-66d7339832b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 17:15:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-67375470e25eafe029c314f624bc375894da7136f274d4d3e7bfb738006d44cf-merged.mount: Deactivated successfully.
Jan 26 17:15:30 compute-0 podman[253939]: 2026-01-26 17:15:30.037423573 +0000 UTC m=+0.120419029 container cleanup 808f0f01465cd36db48d7b3fc8eaba9d4bc961157ab02085d0b30420ae3887c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-74318d1e-b1d8-47d5-8ac3-218d758610fe, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 26 17:15:30 compute-0 systemd[1]: libpod-conmon-808f0f01465cd36db48d7b3fc8eaba9d4bc961157ab02085d0b30420ae3887c2.scope: Deactivated successfully.
Jan 26 17:15:30 compute-0 nova_compute[185389]: 2026-01-26 17:15:30.110 185393 DEBUG nova.virt.libvirt.vif [None req-f8756d8a-4e33-44ed-9671-69a91c9425a7 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:37:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='718285d9-0264-40f4-9fb3-d2faff180284',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:37:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='aa8f1f3bbce34237a208c8e92ca9286f',ramdisk_id='',reservation_id='r-w38kzri4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='718285d9-0264-40f4-9fb3-d2faff180284',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.op
enstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:37:32Z,user_data=None,user_id='3c0ab9326d69400aa6a4a91432885d7f',uuid=60ba224f-9c5d-4eb4-b501-66d7339832b9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f", "address": "fa:16:3e:b0:51:31", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.57", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f88f3ae-fb", "ovs_interfaceid": "0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 17:15:30 compute-0 nova_compute[185389]: 2026-01-26 17:15:30.111 185393 DEBUG nova.network.os_vif_util [None req-f8756d8a-4e33-44ed-9671-69a91c9425a7 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Converting VIF {"id": "0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f", "address": "fa:16:3e:b0:51:31", "network": {"id": "74318d1e-b1d8-47d5-8ac3-218d758610fe", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.57", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa8f1f3bbce34237a208c8e92ca9286f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f88f3ae-fb", "ovs_interfaceid": "0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 17:15:30 compute-0 nova_compute[185389]: 2026-01-26 17:15:30.112 185393 DEBUG nova.network.os_vif_util [None req-f8756d8a-4e33-44ed-9671-69a91c9425a7 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b0:51:31,bridge_name='br-int',has_traffic_filtering=True,id=0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f,network=Network(74318d1e-b1d8-47d5-8ac3-218d758610fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f88f3ae-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 17:15:30 compute-0 nova_compute[185389]: 2026-01-26 17:15:30.112 185393 DEBUG os_vif [None req-f8756d8a-4e33-44ed-9671-69a91c9425a7 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b0:51:31,bridge_name='br-int',has_traffic_filtering=True,id=0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f,network=Network(74318d1e-b1d8-47d5-8ac3-218d758610fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f88f3ae-fb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 17:15:30 compute-0 nova_compute[185389]: 2026-01-26 17:15:30.114 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:15:30 compute-0 nova_compute[185389]: 2026-01-26 17:15:30.114 185393 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0f88f3ae-fb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:15:30 compute-0 nova_compute[185389]: 2026-01-26 17:15:30.116 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:15:30 compute-0 nova_compute[185389]: 2026-01-26 17:15:30.118 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:15:30 compute-0 nova_compute[185389]: 2026-01-26 17:15:30.121 185393 INFO os_vif [None req-f8756d8a-4e33-44ed-9671-69a91c9425a7 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b0:51:31,bridge_name='br-int',has_traffic_filtering=True,id=0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f,network=Network(74318d1e-b1d8-47d5-8ac3-218d758610fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f88f3ae-fb')
Jan 26 17:15:30 compute-0 nova_compute[185389]: 2026-01-26 17:15:30.122 185393 INFO nova.virt.libvirt.driver [None req-f8756d8a-4e33-44ed-9671-69a91c9425a7 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Deleting instance files /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9_del
Jan 26 17:15:30 compute-0 nova_compute[185389]: 2026-01-26 17:15:30.123 185393 INFO nova.virt.libvirt.driver [None req-f8756d8a-4e33-44ed-9671-69a91c9425a7 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Deletion of /var/lib/nova/instances/60ba224f-9c5d-4eb4-b501-66d7339832b9_del complete
Jan 26 17:15:30 compute-0 podman[253986]: 2026-01-26 17:15:30.127114257 +0000 UTC m=+0.060574394 container remove 808f0f01465cd36db48d7b3fc8eaba9d4bc961157ab02085d0b30420ae3887c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-74318d1e-b1d8-47d5-8ac3-218d758610fe, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 26 17:15:30 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:15:30.134 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[68584daf-7980-4273-bb5e-37162427aa0a]: (4, ('Mon Jan 26 05:15:29 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-74318d1e-b1d8-47d5-8ac3-218d758610fe (808f0f01465cd36db48d7b3fc8eaba9d4bc961157ab02085d0b30420ae3887c2)\n808f0f01465cd36db48d7b3fc8eaba9d4bc961157ab02085d0b30420ae3887c2\nMon Jan 26 05:15:30 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-74318d1e-b1d8-47d5-8ac3-218d758610fe (808f0f01465cd36db48d7b3fc8eaba9d4bc961157ab02085d0b30420ae3887c2)\n808f0f01465cd36db48d7b3fc8eaba9d4bc961157ab02085d0b30420ae3887c2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:15:30 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:15:30.136 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[777da438-75b6-4e44-a0f0-c453492aba85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:15:30 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:15:30.137 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap74318d1e-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:15:30 compute-0 nova_compute[185389]: 2026-01-26 17:15:30.139 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:15:30 compute-0 kernel: tap74318d1e-b0: left promiscuous mode
Jan 26 17:15:30 compute-0 nova_compute[185389]: 2026-01-26 17:15:30.151 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:15:30 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:15:30.154 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[732d8b5c-7895-405d-a0eb-50ecae856e1c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:15:30 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:15:30.169 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[8fb323c2-4ccb-47f1-ae77-031d57729bd9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:15:30 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:15:30.171 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[c0bf9910-c7e7-448b-9f2c-0bbfcdac6316]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:15:30 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:15:30.188 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[ebdf3b53-7af1-4fb1-8410-4d8bd97496a6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 410396, 'reachable_time': 36904, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254001, 'error': None, 'target': 'ovnmeta-74318d1e-b1d8-47d5-8ac3-218d758610fe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:15:30 compute-0 nova_compute[185389]: 2026-01-26 17:15:30.192 185393 INFO nova.compute.manager [None req-f8756d8a-4e33-44ed-9671-69a91c9425a7 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Took 0.47 seconds to destroy the instance on the hypervisor.
Jan 26 17:15:30 compute-0 nova_compute[185389]: 2026-01-26 17:15:30.193 185393 DEBUG oslo.service.loopingcall [None req-f8756d8a-4e33-44ed-9671-69a91c9425a7 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 17:15:30 compute-0 nova_compute[185389]: 2026-01-26 17:15:30.193 185393 DEBUG nova.compute.manager [-] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 17:15:30 compute-0 nova_compute[185389]: 2026-01-26 17:15:30.193 185393 DEBUG nova.network.neutron [-] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 17:15:30 compute-0 systemd[1]: run-netns-ovnmeta\x2d74318d1e\x2db1d8\x2d47d5\x2d8ac3\x2d218d758610fe.mount: Deactivated successfully.
Jan 26 17:15:30 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:15:30.202 107449 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-74318d1e-b1d8-47d5-8ac3-218d758610fe deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 17:15:30 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:15:30.203 107449 DEBUG oslo.privsep.daemon [-] privsep: reply[e7d7f7f5-9234-4128-9a4e-7d41ecd281d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:15:30 compute-0 nova_compute[185389]: 2026-01-26 17:15:30.622 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:15:30 compute-0 nova_compute[185389]: 2026-01-26 17:15:30.779 185393 DEBUG nova.compute.manager [req-aca92994-7280-412c-86d9-a2b4bc168625 req-96571e2a-09e6-457a-998c-6192d4862344 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Received event network-vif-unplugged-0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 17:15:30 compute-0 nova_compute[185389]: 2026-01-26 17:15:30.780 185393 DEBUG oslo_concurrency.lockutils [req-aca92994-7280-412c-86d9-a2b4bc168625 req-96571e2a-09e6-457a-998c-6192d4862344 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquiring lock "60ba224f-9c5d-4eb4-b501-66d7339832b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:15:30 compute-0 nova_compute[185389]: 2026-01-26 17:15:30.780 185393 DEBUG oslo_concurrency.lockutils [req-aca92994-7280-412c-86d9-a2b4bc168625 req-96571e2a-09e6-457a-998c-6192d4862344 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "60ba224f-9c5d-4eb4-b501-66d7339832b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:15:30 compute-0 nova_compute[185389]: 2026-01-26 17:15:30.781 185393 DEBUG oslo_concurrency.lockutils [req-aca92994-7280-412c-86d9-a2b4bc168625 req-96571e2a-09e6-457a-998c-6192d4862344 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "60ba224f-9c5d-4eb4-b501-66d7339832b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:15:30 compute-0 nova_compute[185389]: 2026-01-26 17:15:30.781 185393 DEBUG nova.compute.manager [req-aca92994-7280-412c-86d9-a2b4bc168625 req-96571e2a-09e6-457a-998c-6192d4862344 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] No waiting events found dispatching network-vif-unplugged-0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 17:15:30 compute-0 nova_compute[185389]: 2026-01-26 17:15:30.782 185393 DEBUG nova.compute.manager [req-aca92994-7280-412c-86d9-a2b4bc168625 req-96571e2a-09e6-457a-998c-6192d4862344 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Received event network-vif-unplugged-0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 17:15:31 compute-0 nova_compute[185389]: 2026-01-26 17:15:31.271 185393 DEBUG nova.network.neutron [-] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 17:15:31 compute-0 nova_compute[185389]: 2026-01-26 17:15:31.301 185393 INFO nova.compute.manager [-] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Took 1.11 seconds to deallocate network for instance.
Jan 26 17:15:31 compute-0 nova_compute[185389]: 2026-01-26 17:15:31.339 185393 DEBUG oslo_concurrency.lockutils [None req-f8756d8a-4e33-44ed-9671-69a91c9425a7 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:15:31 compute-0 nova_compute[185389]: 2026-01-26 17:15:31.340 185393 DEBUG oslo_concurrency.lockutils [None req-f8756d8a-4e33-44ed-9671-69a91c9425a7 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:15:31 compute-0 nova_compute[185389]: 2026-01-26 17:15:31.402 185393 DEBUG nova.compute.manager [req-48156fff-f8ee-4d8e-a5ff-c7cf78a81082 req-862ba47b-b371-46fd-877d-865c1c93a727 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Received event network-vif-deleted-0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 17:15:31 compute-0 openstack_network_exporter[204387]: ERROR   17:15:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 17:15:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:15:31 compute-0 openstack_network_exporter[204387]: ERROR   17:15:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 17:15:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:15:31 compute-0 nova_compute[185389]: 2026-01-26 17:15:31.433 185393 DEBUG nova.compute.provider_tree [None req-f8756d8a-4e33-44ed-9671-69a91c9425a7 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:15:31 compute-0 nova_compute[185389]: 2026-01-26 17:15:31.447 185393 DEBUG nova.scheduler.client.report [None req-f8756d8a-4e33-44ed-9671-69a91c9425a7 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:15:31 compute-0 nova_compute[185389]: 2026-01-26 17:15:31.486 185393 DEBUG oslo_concurrency.lockutils [None req-f8756d8a-4e33-44ed-9671-69a91c9425a7 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.146s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:15:31 compute-0 nova_compute[185389]: 2026-01-26 17:15:31.521 185393 INFO nova.scheduler.client.report [None req-f8756d8a-4e33-44ed-9671-69a91c9425a7 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Deleted allocations for instance 60ba224f-9c5d-4eb4-b501-66d7339832b9
Jan 26 17:15:31 compute-0 nova_compute[185389]: 2026-01-26 17:15:31.581 185393 DEBUG oslo_concurrency.lockutils [None req-f8756d8a-4e33-44ed-9671-69a91c9425a7 3c0ab9326d69400aa6a4a91432885d7f aa8f1f3bbce34237a208c8e92ca9286f - - default default] Lock "60ba224f-9c5d-4eb4-b501-66d7339832b9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.863s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:15:32 compute-0 podman[254005]: 2026-01-26 17:15:32.203242802 +0000 UTC m=+0.083736703 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, io.buildah.version=1.29.0, io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., architecture=x86_64, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, name=ubi9, build-date=2024-09-18T21:23:30)
Jan 26 17:15:32 compute-0 podman[254004]: 2026-01-26 17:15:32.207718634 +0000 UTC m=+0.086248752 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 
9 Base Image, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 26 17:15:32 compute-0 podman[254003]: 2026-01-26 17:15:32.265321507 +0000 UTC m=+0.150857485 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, 
tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 26 17:15:32 compute-0 nova_compute[185389]: 2026-01-26 17:15:32.880 185393 DEBUG nova.compute.manager [req-03c1f2f6-f00d-4a89-bab3-59cba64a03ae req-7fbac8da-ebde-4f2f-9ff1-84afd78ed2a6 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Received event network-vif-plugged-0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 17:15:32 compute-0 nova_compute[185389]: 2026-01-26 17:15:32.881 185393 DEBUG oslo_concurrency.lockutils [req-03c1f2f6-f00d-4a89-bab3-59cba64a03ae req-7fbac8da-ebde-4f2f-9ff1-84afd78ed2a6 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquiring lock "60ba224f-9c5d-4eb4-b501-66d7339832b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:15:32 compute-0 nova_compute[185389]: 2026-01-26 17:15:32.881 185393 DEBUG oslo_concurrency.lockutils [req-03c1f2f6-f00d-4a89-bab3-59cba64a03ae req-7fbac8da-ebde-4f2f-9ff1-84afd78ed2a6 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "60ba224f-9c5d-4eb4-b501-66d7339832b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:15:32 compute-0 nova_compute[185389]: 2026-01-26 17:15:32.881 185393 DEBUG oslo_concurrency.lockutils [req-03c1f2f6-f00d-4a89-bab3-59cba64a03ae req-7fbac8da-ebde-4f2f-9ff1-84afd78ed2a6 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "60ba224f-9c5d-4eb4-b501-66d7339832b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:15:32 compute-0 nova_compute[185389]: 2026-01-26 17:15:32.881 185393 DEBUG nova.compute.manager [req-03c1f2f6-f00d-4a89-bab3-59cba64a03ae req-7fbac8da-ebde-4f2f-9ff1-84afd78ed2a6 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] No waiting events found dispatching network-vif-plugged-0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 17:15:32 compute-0 nova_compute[185389]: 2026-01-26 17:15:32.881 185393 WARNING nova.compute.manager [req-03c1f2f6-f00d-4a89-bab3-59cba64a03ae req-7fbac8da-ebde-4f2f-9ff1-84afd78ed2a6 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Received unexpected event network-vif-plugged-0f88f3ae-fbf1-46df-9a0b-7e52e2439d0f for instance with vm_state deleted and task_state None.
Jan 26 17:15:35 compute-0 nova_compute[185389]: 2026-01-26 17:15:35.118 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:15:35 compute-0 nova_compute[185389]: 2026-01-26 17:15:35.626 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:15:40 compute-0 nova_compute[185389]: 2026-01-26 17:15:40.122 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:15:40 compute-0 nova_compute[185389]: 2026-01-26 17:15:40.628 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:15:45 compute-0 nova_compute[185389]: 2026-01-26 17:15:45.019 185393 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769447730.0177228, 60ba224f-9c5d-4eb4-b501-66d7339832b9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 17:15:45 compute-0 nova_compute[185389]: 2026-01-26 17:15:45.019 185393 INFO nova.compute.manager [-] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] VM Stopped (Lifecycle Event)
Jan 26 17:15:45 compute-0 nova_compute[185389]: 2026-01-26 17:15:45.043 185393 DEBUG nova.compute.manager [None req-7b06cfa2-5ed7-4bd2-a78c-e5b13fd774c7 - - - - - -] [instance: 60ba224f-9c5d-4eb4-b501-66d7339832b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 17:15:45 compute-0 nova_compute[185389]: 2026-01-26 17:15:45.123 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:15:45 compute-0 nova_compute[185389]: 2026-01-26 17:15:45.631 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:15:50 compute-0 nova_compute[185389]: 2026-01-26 17:15:50.126 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:15:50 compute-0 nova_compute[185389]: 2026-01-26 17:15:50.632 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:15:55 compute-0 nova_compute[185389]: 2026-01-26 17:15:55.128 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:15:55 compute-0 podman[254068]: 2026-01-26 17:15:55.219998813 +0000 UTC m=+0.083206899 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 26 17:15:55 compute-0 podman[254067]: 2026-01-26 17:15:55.226383996 +0000 UTC m=+0.091328260 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.build-date=20260120, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, config_id=ceilometer_agent_compute)
Jan 26 17:15:55 compute-0 podman[254066]: 2026-01-26 17:15:55.253138802 +0000 UTC m=+0.130526753 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, vcs-type=git, build-date=2025-08-20T13:12:41, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., config_id=openstack_network_exporter, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, version=9.6, com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Jan 26 17:15:55 compute-0 nova_compute[185389]: 2026-01-26 17:15:55.634 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:15:58 compute-0 podman[254127]: 2026-01-26 17:15:58.188489526 +0000 UTC m=+0.072315024 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 26 17:15:59 compute-0 podman[201244]: time="2026-01-26T17:15:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 17:15:59 compute-0 podman[201244]: @ - - [26/Jan/2026:17:15:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 26 17:15:59 compute-0 podman[201244]: @ - - [26/Jan/2026:17:15:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3919 "" "Go-http-client/1.1"
Jan 26 17:16:00 compute-0 nova_compute[185389]: 2026-01-26 17:16:00.131 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:16:00 compute-0 podman[254148]: 2026-01-26 17:16:00.168744519 +0000 UTC m=+0.059184147 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 26 17:16:00 compute-0 nova_compute[185389]: 2026-01-26 17:16:00.638 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:16:01 compute-0 ovn_controller[97699]: 2026-01-26T17:16:01Z|00063|memory_trim|INFO|Detected inactivity (last active 30030 ms ago): trimming memory
Jan 26 17:16:01 compute-0 openstack_network_exporter[204387]: ERROR   17:16:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 17:16:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:16:01 compute-0 openstack_network_exporter[204387]: ERROR   17:16:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 17:16:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:16:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:16:01.767 106955 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:16:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:16:01.768 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:16:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:16:01.768 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:16:03 compute-0 podman[254168]: 2026-01-26 17:16:03.191605199 +0000 UTC m=+0.076413875 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 26 17:16:03 compute-0 podman[254169]: 2026-01-26 17:16:03.206119282 +0000 UTC m=+0.087102254 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, release-0.7.12=, architecture=x86_64, config_id=kepler, distribution-scope=public, managed_by=edpm_ansible, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., vcs-type=git, build-date=2024-09-18T21:23:30, version=9.4, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 26 17:16:03 compute-0 podman[254167]: 2026-01-26 17:16:03.252328347 +0000 UTC m=+0.143547516 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, 
org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 26 17:16:05 compute-0 nova_compute[185389]: 2026-01-26 17:16:05.133 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:16:05 compute-0 nova_compute[185389]: 2026-01-26 17:16:05.640 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:16:05 compute-0 nova_compute[185389]: 2026-01-26 17:16:05.718 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:16:05 compute-0 nova_compute[185389]: 2026-01-26 17:16:05.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:16:05 compute-0 nova_compute[185389]: 2026-01-26 17:16:05.720 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 17:16:07 compute-0 nova_compute[185389]: 2026-01-26 17:16:07.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:16:08 compute-0 nova_compute[185389]: 2026-01-26 17:16:08.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:16:09 compute-0 nova_compute[185389]: 2026-01-26 17:16:09.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:16:09 compute-0 nova_compute[185389]: 2026-01-26 17:16:09.721 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 17:16:09 compute-0 nova_compute[185389]: 2026-01-26 17:16:09.721 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 17:16:09 compute-0 nova_compute[185389]: 2026-01-26 17:16:09.737 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 17:16:10 compute-0 nova_compute[185389]: 2026-01-26 17:16:10.135 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:16:10 compute-0 nova_compute[185389]: 2026-01-26 17:16:10.642 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:16:10 compute-0 nova_compute[185389]: 2026-01-26 17:16:10.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:16:11 compute-0 sshd-session[254231]: Invalid user sol from 80.94.92.171 port 33836
Jan 26 17:16:11 compute-0 sshd-session[254231]: Connection closed by invalid user sol 80.94.92.171 port 33836 [preauth]
Jan 26 17:16:14 compute-0 nova_compute[185389]: 2026-01-26 17:16:14.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:16:14 compute-0 nova_compute[185389]: 2026-01-26 17:16:14.816 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:16:14 compute-0 nova_compute[185389]: 2026-01-26 17:16:14.817 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:16:14 compute-0 nova_compute[185389]: 2026-01-26 17:16:14.818 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:16:14 compute-0 nova_compute[185389]: 2026-01-26 17:16:14.819 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 17:16:15 compute-0 nova_compute[185389]: 2026-01-26 17:16:15.138 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:16:15 compute-0 nova_compute[185389]: 2026-01-26 17:16:15.180 185393 WARNING nova.virt.libvirt.driver [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:16:15 compute-0 nova_compute[185389]: 2026-01-26 17:16:15.182 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5348MB free_disk=72.41310501098633GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 17:16:15 compute-0 nova_compute[185389]: 2026-01-26 17:16:15.182 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:16:15 compute-0 nova_compute[185389]: 2026-01-26 17:16:15.182 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:16:15 compute-0 nova_compute[185389]: 2026-01-26 17:16:15.249 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 17:16:15 compute-0 nova_compute[185389]: 2026-01-26 17:16:15.250 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 17:16:15 compute-0 nova_compute[185389]: 2026-01-26 17:16:15.281 185393 DEBUG nova.compute.provider_tree [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:16:15 compute-0 nova_compute[185389]: 2026-01-26 17:16:15.303 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:16:15 compute-0 nova_compute[185389]: 2026-01-26 17:16:15.341 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 17:16:15 compute-0 nova_compute[185389]: 2026-01-26 17:16:15.342 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.159s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:16:15 compute-0 nova_compute[185389]: 2026-01-26 17:16:15.644 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:16:20 compute-0 nova_compute[185389]: 2026-01-26 17:16:20.141 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:16:20 compute-0 nova_compute[185389]: 2026-01-26 17:16:20.337 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:16:20 compute-0 nova_compute[185389]: 2026-01-26 17:16:20.647 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:16:24 compute-0 nova_compute[185389]: 2026-01-26 17:16:24.714 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:16:24 compute-0 nova_compute[185389]: 2026-01-26 17:16:24.744 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:16:25 compute-0 nova_compute[185389]: 2026-01-26 17:16:25.144 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:16:25 compute-0 nova_compute[185389]: 2026-01-26 17:16:25.652 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:16:26 compute-0 podman[254236]: 2026-01-26 17:16:26.191933645 +0000 UTC m=+0.077030692 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=openstack_network_exporter, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vendor=Red Hat, Inc., name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350)
Jan 26 17:16:26 compute-0 podman[254237]: 2026-01-26 17:16:26.205213845 +0000 UTC m=+0.089149121 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260120, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, io.buildah.version=1.41.4)
Jan 26 17:16:26 compute-0 podman[254238]: 2026-01-26 17:16:26.216402168 +0000 UTC m=+0.097059804 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 26 17:16:29 compute-0 podman[254298]: 2026-01-26 17:16:29.183546235 +0000 UTC m=+0.067982876 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 26 17:16:29 compute-0 podman[201244]: time="2026-01-26T17:16:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 17:16:29 compute-0 podman[201244]: @ - - [26/Jan/2026:17:16:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 26 17:16:29 compute-0 podman[201244]: @ - - [26/Jan/2026:17:16:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3912 "" "Go-http-client/1.1"
Jan 26 17:16:30 compute-0 nova_compute[185389]: 2026-01-26 17:16:30.146 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:16:30 compute-0 nova_compute[185389]: 2026-01-26 17:16:30.654 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:16:31 compute-0 podman[254321]: 2026-01-26 17:16:31.177561411 +0000 UTC m=+0.062130427 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.352 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.353 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.353 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.354 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f04ce8af020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.355 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.355 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.355 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.356 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.356 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.356 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.357 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.357 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f04ce8af080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.357 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{'disk.device.write.bytes': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.358 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{'disk.device.write.bytes': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.358 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{'disk.device.write.bytes': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.359 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{'disk.device.write.bytes': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.359 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{'disk.device.write.bytes': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.359 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{'disk.device.write.bytes': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.360 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8adb20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{'disk.device.write.bytes': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.360 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{'disk.device.write.bytes': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.360 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{'disk.device.write.bytes': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.361 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{'disk.device.write.bytes': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.361 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{'disk.device.write.bytes': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.361 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{'disk.device.write.bytes': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.361 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{'disk.device.write.bytes': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.362 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{'disk.device.write.bytes': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.362 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{'disk.device.write.bytes': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.362 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{'disk.device.write.bytes': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.363 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{'disk.device.write.bytes': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.363 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{'disk.device.write.bytes': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.363 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{'disk.device.write.bytes': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.358 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.365 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f04ce8af0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.365 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.365 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f04ce8af470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.366 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.366 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f04ce8ac260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.366 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.366 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f04cfa81c70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.366 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.366 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f04ce8ac170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.366 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.366 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f04ce8af140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.366 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.367 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f04ce8adb50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.367 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.367 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f04ce8ad9a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.367 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.367 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f04ce8af1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.367 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.367 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f04ce8ad8e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.367 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.367 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f04ce8ada60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.368 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.368 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f04ce8adaf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.368 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.368 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f04ce8ad880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.368 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.368 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f04ce8af3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.368 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.368 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f04ce8af6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.369 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.369 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f04ce8af410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.369 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.369 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f04ce8ac470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.369 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.369 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f04d17142f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.369 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.369 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f04ce8afe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.369 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.370 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f04ce8aede0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.370 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.370 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f04ce8aef00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.370 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.370 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f04ce8ac710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.370 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.370 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f04ce8aef60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.370 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.371 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f04ce8aefc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.371 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.371 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.371 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.372 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.372 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.372 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.372 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.373 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.373 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.373 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.373 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.374 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.374 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.374 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.374 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.374 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.375 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.375 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.375 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.375 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.376 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.376 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.376 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.376 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.377 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.377 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:16:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:16:31.377 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:16:31 compute-0 openstack_network_exporter[204387]: ERROR   17:16:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 17:16:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:16:31 compute-0 openstack_network_exporter[204387]: ERROR   17:16:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 17:16:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:16:34 compute-0 podman[254341]: 2026-01-26 17:16:34.196516524 +0000 UTC m=+0.072438597 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, io.buildah.version=1.29.0, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, release=1214.1726694543, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, io.openshift.expose-services=, maintainer=Red Hat, Inc., container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., name=ubi9, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9)
Jan 26 17:16:34 compute-0 podman[254340]: 2026-01-26 17:16:34.199366761 +0000 UTC m=+0.076016234 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 26 17:16:34 compute-0 podman[254339]: 2026-01-26 17:16:34.245609926 +0000 UTC m=+0.130013349 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, 
container_name=ovn_controller, tcib_managed=true)
Jan 26 17:16:35 compute-0 nova_compute[185389]: 2026-01-26 17:16:35.148 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:16:35 compute-0 nova_compute[185389]: 2026-01-26 17:16:35.656 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:16:40 compute-0 nova_compute[185389]: 2026-01-26 17:16:40.151 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:16:40 compute-0 nova_compute[185389]: 2026-01-26 17:16:40.659 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:16:45 compute-0 nova_compute[185389]: 2026-01-26 17:16:45.153 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:16:45 compute-0 nova_compute[185389]: 2026-01-26 17:16:45.662 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:16:50 compute-0 nova_compute[185389]: 2026-01-26 17:16:50.158 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:16:50 compute-0 nova_compute[185389]: 2026-01-26 17:16:50.664 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:16:55 compute-0 nova_compute[185389]: 2026-01-26 17:16:55.160 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:16:55 compute-0 nova_compute[185389]: 2026-01-26 17:16:55.667 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:16:57 compute-0 podman[254403]: 2026-01-26 17:16:57.185210879 +0000 UTC m=+0.070485744 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, config_id=openstack_network_exporter, io.buildah.version=1.33.7, name=ubi9-minimal, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses 
microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, distribution-scope=public, managed_by=edpm_ansible)
Jan 26 17:16:57 compute-0 podman[254405]: 2026-01-26 17:16:57.192816755 +0000 UTC m=+0.070236697 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 26 17:16:57 compute-0 podman[254404]: 2026-01-26 17:16:57.199585509 +0000 UTC m=+0.079765896 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260120, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 26 17:16:59 compute-0 podman[201244]: time="2026-01-26T17:16:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 17:16:59 compute-0 podman[201244]: @ - - [26/Jan/2026:17:16:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 26 17:16:59 compute-0 podman[201244]: @ - - [26/Jan/2026:17:16:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3906 "" "Go-http-client/1.1"
Jan 26 17:17:00 compute-0 nova_compute[185389]: 2026-01-26 17:17:00.164 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:17:00 compute-0 podman[254464]: 2026-01-26 17:17:00.184915889 +0000 UTC m=+0.073890786 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 26 17:17:00 compute-0 nova_compute[185389]: 2026-01-26 17:17:00.667 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:17:01 compute-0 openstack_network_exporter[204387]: ERROR   17:17:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 17:17:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:17:01 compute-0 openstack_network_exporter[204387]: ERROR   17:17:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 17:17:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:17:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:17:01.769 106955 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:17:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:17:01.769 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:17:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:17:01.770 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:17:02 compute-0 podman[254487]: 2026-01-26 17:17:02.202392962 +0000 UTC m=+0.089052539 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 26 17:17:05 compute-0 nova_compute[185389]: 2026-01-26 17:17:05.166 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:17:05 compute-0 podman[254509]: 2026-01-26 17:17:05.202290877 +0000 UTC m=+0.076037445 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, config_id=kepler, release-0.7.12=, io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, distribution-scope=public, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, name=ubi9, vendor=Red Hat, Inc., release=1214.1726694543, version=9.4, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 26 17:17:05 compute-0 podman[254508]: 2026-01-26 17:17:05.202523123 +0000 UTC m=+0.080902677 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 17:17:05 compute-0 podman[254507]: 2026-01-26 17:17:05.234699637 +0000 UTC m=+0.115036064 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 17:17:05 compute-0 nova_compute[185389]: 2026-01-26 17:17:05.670 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:17:05 compute-0 nova_compute[185389]: 2026-01-26 17:17:05.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:17:07 compute-0 nova_compute[185389]: 2026-01-26 17:17:07.718 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:17:07 compute-0 nova_compute[185389]: 2026-01-26 17:17:07.719 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 17:17:09 compute-0 nova_compute[185389]: 2026-01-26 17:17:09.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:17:09 compute-0 nova_compute[185389]: 2026-01-26 17:17:09.720 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 17:17:09 compute-0 nova_compute[185389]: 2026-01-26 17:17:09.720 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 17:17:09 compute-0 nova_compute[185389]: 2026-01-26 17:17:09.746 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 17:17:09 compute-0 nova_compute[185389]: 2026-01-26 17:17:09.746 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:17:10 compute-0 nova_compute[185389]: 2026-01-26 17:17:10.169 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:17:10 compute-0 nova_compute[185389]: 2026-01-26 17:17:10.673 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:17:10 compute-0 nova_compute[185389]: 2026-01-26 17:17:10.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:17:11 compute-0 nova_compute[185389]: 2026-01-26 17:17:11.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:17:14 compute-0 nova_compute[185389]: 2026-01-26 17:17:14.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:17:14 compute-0 nova_compute[185389]: 2026-01-26 17:17:14.849 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:17:14 compute-0 nova_compute[185389]: 2026-01-26 17:17:14.850 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:17:14 compute-0 nova_compute[185389]: 2026-01-26 17:17:14.851 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:17:14 compute-0 nova_compute[185389]: 2026-01-26 17:17:14.852 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 17:17:15 compute-0 nova_compute[185389]: 2026-01-26 17:17:15.171 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:17:15 compute-0 nova_compute[185389]: 2026-01-26 17:17:15.261 185393 WARNING nova.virt.libvirt.driver [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:17:15 compute-0 nova_compute[185389]: 2026-01-26 17:17:15.263 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5350MB free_disk=72.41310501098633GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 17:17:15 compute-0 nova_compute[185389]: 2026-01-26 17:17:15.263 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:17:15 compute-0 nova_compute[185389]: 2026-01-26 17:17:15.264 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:17:15 compute-0 nova_compute[185389]: 2026-01-26 17:17:15.584 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 17:17:15 compute-0 nova_compute[185389]: 2026-01-26 17:17:15.585 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 17:17:15 compute-0 nova_compute[185389]: 2026-01-26 17:17:15.608 185393 DEBUG nova.compute.provider_tree [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:17:15 compute-0 nova_compute[185389]: 2026-01-26 17:17:15.625 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:17:15 compute-0 nova_compute[185389]: 2026-01-26 17:17:15.626 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 17:17:15 compute-0 nova_compute[185389]: 2026-01-26 17:17:15.626 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.363s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:17:15 compute-0 nova_compute[185389]: 2026-01-26 17:17:15.674 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:17:20 compute-0 nova_compute[185389]: 2026-01-26 17:17:20.174 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:17:20 compute-0 nova_compute[185389]: 2026-01-26 17:17:20.677 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:17:22 compute-0 nova_compute[185389]: 2026-01-26 17:17:22.621 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:17:25 compute-0 nova_compute[185389]: 2026-01-26 17:17:25.178 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:17:25 compute-0 nova_compute[185389]: 2026-01-26 17:17:25.680 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:17:25 compute-0 nova_compute[185389]: 2026-01-26 17:17:25.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:17:28 compute-0 podman[254571]: 2026-01-26 17:17:28.247388905 +0000 UTC m=+0.121489187 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20260120, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4)
Jan 26 17:17:28 compute-0 podman[254572]: 2026-01-26 17:17:28.257707816 +0000 UTC m=+0.126183736 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 26 17:17:28 compute-0 podman[254570]: 2026-01-26 17:17:28.276295471 +0000 UTC m=+0.156907970 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, version=9.6, architecture=x86_64, distribution-scope=public, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, config_id=openstack_network_exporter, maintainer=Red Hat, Inc., release=1755695350, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container)
Jan 26 17:17:29 compute-0 podman[201244]: time="2026-01-26T17:17:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 17:17:29 compute-0 podman[201244]: @ - - [26/Jan/2026:17:17:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 26 17:17:29 compute-0 podman[201244]: @ - - [26/Jan/2026:17:17:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3911 "" "Go-http-client/1.1"
Jan 26 17:17:30 compute-0 nova_compute[185389]: 2026-01-26 17:17:30.182 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:17:30 compute-0 nova_compute[185389]: 2026-01-26 17:17:30.683 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:17:31 compute-0 podman[254636]: 2026-01-26 17:17:31.191433644 +0000 UTC m=+0.075611473 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 26 17:17:31 compute-0 openstack_network_exporter[204387]: ERROR   17:17:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 17:17:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:17:31 compute-0 openstack_network_exporter[204387]: ERROR   17:17:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 17:17:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:17:33 compute-0 podman[254663]: 2026-01-26 17:17:33.198033571 +0000 UTC m=+0.084994018 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 26 17:17:35 compute-0 nova_compute[185389]: 2026-01-26 17:17:35.185 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:17:35 compute-0 nova_compute[185389]: 2026-01-26 17:17:35.686 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:17:36 compute-0 podman[254682]: 2026-01-26 17:17:36.213991673 +0000 UTC m=+0.092321477 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 17:17:36 compute-0 podman[254683]: 2026-01-26 17:17:36.21427805 +0000 UTC m=+0.087191337 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, com.redhat.component=ubi9-container, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, architecture=x86_64, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, container_name=kepler, managed_by=edpm_ansible, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, name=ubi9, release=1214.1726694543, vendor=Red Hat, Inc., config_id=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Jan 26 17:17:36 compute-0 podman[254681]: 2026-01-26 17:17:36.23968045 +0000 UTC m=+0.123609916 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 26 17:17:40 compute-0 nova_compute[185389]: 2026-01-26 17:17:40.190 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:17:40 compute-0 nova_compute[185389]: 2026-01-26 17:17:40.686 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:17:45 compute-0 nova_compute[185389]: 2026-01-26 17:17:45.192 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:17:45 compute-0 nova_compute[185389]: 2026-01-26 17:17:45.689 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:17:50 compute-0 nova_compute[185389]: 2026-01-26 17:17:50.196 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:17:50 compute-0 nova_compute[185389]: 2026-01-26 17:17:50.692 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:17:55 compute-0 nova_compute[185389]: 2026-01-26 17:17:55.199 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:17:55 compute-0 nova_compute[185389]: 2026-01-26 17:17:55.695 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:17:59 compute-0 podman[254743]: 2026-01-26 17:17:59.204753617 +0000 UTC m=+0.082979794 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, version=9.6, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, architecture=x86_64, release=1755695350, distribution-scope=public, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., name=ubi9-minimal)
Jan 26 17:17:59 compute-0 podman[254744]: 2026-01-26 17:17:59.210588825 +0000 UTC m=+0.088251586 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260120, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4)
Jan 26 17:17:59 compute-0 podman[254745]: 2026-01-26 17:17:59.21629031 +0000 UTC m=+0.090252581 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 26 17:17:59 compute-0 podman[201244]: time="2026-01-26T17:17:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 17:17:59 compute-0 podman[201244]: @ - - [26/Jan/2026:17:17:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 26 17:17:59 compute-0 podman[201244]: @ - - [26/Jan/2026:17:17:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3911 "" "Go-http-client/1.1"
Jan 26 17:18:00 compute-0 nova_compute[185389]: 2026-01-26 17:18:00.202 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:18:00 compute-0 nova_compute[185389]: 2026-01-26 17:18:00.698 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:18:01 compute-0 openstack_network_exporter[204387]: ERROR   17:18:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 17:18:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:18:01 compute-0 openstack_network_exporter[204387]: ERROR   17:18:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 17:18:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:18:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:18:01.771 106955 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:18:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:18:01.772 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:18:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:18:01.772 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:18:02 compute-0 podman[254802]: 2026-01-26 17:18:02.199130612 +0000 UTC m=+0.087783213 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 26 17:18:04 compute-0 podman[254825]: 2026-01-26 17:18:04.199893862 +0000 UTC m=+0.087491045 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 17:18:05 compute-0 nova_compute[185389]: 2026-01-26 17:18:05.205 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:18:05 compute-0 nova_compute[185389]: 2026-01-26 17:18:05.702 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:18:06 compute-0 nova_compute[185389]: 2026-01-26 17:18:06.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:18:07 compute-0 podman[254845]: 2026-01-26 17:18:07.198366228 +0000 UTC m=+0.078768549 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1214.1726694543, com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, config_id=kepler, vcs-type=git, io.openshift.expose-services=, maintainer=Red Hat, Inc., name=ubi9, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, architecture=x86_64, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public)
Jan 26 17:18:07 compute-0 podman[254844]: 2026-01-26 17:18:07.204889125 +0000 UTC m=+0.088722509 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 26 17:18:07 compute-0 podman[254843]: 2026-01-26 17:18:07.228173777 +0000 UTC m=+0.118565799 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, 
org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:18:08 compute-0 nova_compute[185389]: 2026-01-26 17:18:08.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:18:08 compute-0 nova_compute[185389]: 2026-01-26 17:18:08.719 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 17:18:09 compute-0 nova_compute[185389]: 2026-01-26 17:18:09.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:18:09 compute-0 nova_compute[185389]: 2026-01-26 17:18:09.720 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 17:18:09 compute-0 nova_compute[185389]: 2026-01-26 17:18:09.720 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 17:18:09 compute-0 nova_compute[185389]: 2026-01-26 17:18:09.742 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 17:18:10 compute-0 nova_compute[185389]: 2026-01-26 17:18:10.207 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:18:10 compute-0 nova_compute[185389]: 2026-01-26 17:18:10.704 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:18:11 compute-0 nova_compute[185389]: 2026-01-26 17:18:11.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:18:11 compute-0 nova_compute[185389]: 2026-01-26 17:18:11.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:18:11 compute-0 nova_compute[185389]: 2026-01-26 17:18:11.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:18:15 compute-0 nova_compute[185389]: 2026-01-26 17:18:15.209 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:18:15 compute-0 nova_compute[185389]: 2026-01-26 17:18:15.707 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:18:15 compute-0 nova_compute[185389]: 2026-01-26 17:18:15.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:18:15 compute-0 nova_compute[185389]: 2026-01-26 17:18:15.762 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:18:15 compute-0 nova_compute[185389]: 2026-01-26 17:18:15.762 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:18:15 compute-0 nova_compute[185389]: 2026-01-26 17:18:15.763 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:18:15 compute-0 nova_compute[185389]: 2026-01-26 17:18:15.763 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 17:18:16 compute-0 nova_compute[185389]: 2026-01-26 17:18:16.826 185393 WARNING nova.virt.libvirt.driver [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:18:16 compute-0 nova_compute[185389]: 2026-01-26 17:18:16.828 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5333MB free_disk=72.41310501098633GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 17:18:16 compute-0 nova_compute[185389]: 2026-01-26 17:18:16.828 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:18:16 compute-0 nova_compute[185389]: 2026-01-26 17:18:16.828 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:18:17 compute-0 nova_compute[185389]: 2026-01-26 17:18:17.146 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 17:18:17 compute-0 nova_compute[185389]: 2026-01-26 17:18:17.147 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 17:18:17 compute-0 nova_compute[185389]: 2026-01-26 17:18:17.231 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Refreshing inventories for resource provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 26 17:18:17 compute-0 nova_compute[185389]: 2026-01-26 17:18:17.310 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Updating ProviderTree inventory for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 26 17:18:17 compute-0 nova_compute[185389]: 2026-01-26 17:18:17.311 185393 DEBUG nova.compute.provider_tree [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Updating inventory in ProviderTree for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 26 17:18:17 compute-0 nova_compute[185389]: 2026-01-26 17:18:17.331 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Refreshing aggregate associations for resource provider b0bb5d31-f35b-4a04-b67d-66acc24fb822, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 26 17:18:17 compute-0 nova_compute[185389]: 2026-01-26 17:18:17.373 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Refreshing trait associations for resource provider b0bb5d31-f35b-4a04-b67d-66acc24fb822, traits: COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SHA,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_ABM,HW_CPU_X86_AMD_SVM,HW_CPU_X86_F16C,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE42,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_BMI,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_MMX,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 26 17:18:17 compute-0 nova_compute[185389]: 2026-01-26 17:18:17.403 185393 DEBUG nova.compute.provider_tree [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:18:17 compute-0 nova_compute[185389]: 2026-01-26 17:18:17.423 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:18:17 compute-0 nova_compute[185389]: 2026-01-26 17:18:17.425 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 17:18:17 compute-0 nova_compute[185389]: 2026-01-26 17:18:17.426 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.598s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:18:20 compute-0 nova_compute[185389]: 2026-01-26 17:18:20.212 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:18:20 compute-0 nova_compute[185389]: 2026-01-26 17:18:20.710 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:18:23 compute-0 nova_compute[185389]: 2026-01-26 17:18:23.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:18:23 compute-0 nova_compute[185389]: 2026-01-26 17:18:23.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:18:23 compute-0 nova_compute[185389]: 2026-01-26 17:18:23.741 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:18:23 compute-0 nova_compute[185389]: 2026-01-26 17:18:23.741 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 26 17:18:23 compute-0 nova_compute[185389]: 2026-01-26 17:18:23.754 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 26 17:18:23 compute-0 nova_compute[185389]: 2026-01-26 17:18:23.754 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:18:23 compute-0 nova_compute[185389]: 2026-01-26 17:18:23.754 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 26 17:18:25 compute-0 nova_compute[185389]: 2026-01-26 17:18:25.214 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:18:25 compute-0 nova_compute[185389]: 2026-01-26 17:18:25.713 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:18:27 compute-0 nova_compute[185389]: 2026-01-26 17:18:27.762 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:18:28 compute-0 nova_compute[185389]: 2026-01-26 17:18:28.714 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:18:29 compute-0 podman[201244]: time="2026-01-26T17:18:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 17:18:29 compute-0 podman[201244]: @ - - [26/Jan/2026:17:18:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 26 17:18:29 compute-0 podman[201244]: @ - - [26/Jan/2026:17:18:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3917 "" "Go-http-client/1.1"
Jan 26 17:18:30 compute-0 podman[254911]: 2026-01-26 17:18:30.185029562 +0000 UTC m=+0.062811306 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 26 17:18:30 compute-0 podman[254909]: 2026-01-26 17:18:30.194485078 +0000 UTC m=+0.079547399 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.openshift.expose-services=, managed_by=edpm_ansible, vcs-type=git, io.openshift.tags=minimal rhel9, distribution-scope=public, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, name=ubi9-minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': 
['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=openstack_network_exporter, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.buildah.version=1.33.7)
Jan 26 17:18:30 compute-0 nova_compute[185389]: 2026-01-26 17:18:30.216 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:18:30 compute-0 podman[254910]: 2026-01-26 17:18:30.219778915 +0000 UTC m=+0.101524196 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260120, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Jan 26 17:18:30 compute-0 nova_compute[185389]: 2026-01-26 17:18:30.715 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.354 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.354 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.355 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce488530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.356 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f04ce8af020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.356 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce488530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.357 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce488530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.357 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce488530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.357 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce488530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.357 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce488530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.357 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce488530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.357 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce488530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.358 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce488530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.358 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce488530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.358 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce488530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.358 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce488530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.358 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce488530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.359 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8adb20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce488530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.359 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce488530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.359 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce488530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.359 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce488530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.360 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce488530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.360 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce488530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.361 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.361 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f04ce8af080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.361 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.362 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f04ce8af0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.361 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce488530>] with cache [{}], pollster history [{'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.362 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.362 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f04ce8af470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.362 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce488530>] with cache [{}], pollster history [{'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.362 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.363 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f04ce8ac260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.363 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.363 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce488530>] with cache [{}], pollster history [{'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.363 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f04cfa81c70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.364 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.364 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce488530>] with cache [{}], pollster history [{'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.364 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce488530>] with cache [{}], pollster history [{'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.365 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce488530>] with cache [{}], pollster history [{'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.365 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04ce488530>] with cache [{}], pollster history [{'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.364 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f04ce8ac170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.365 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.365 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f04ce8af140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.365 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.366 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f04ce8adb50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.366 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.366 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f04ce8ad9a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.366 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.366 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f04ce8af1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.366 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.366 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f04ce8ad8e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.366 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.366 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f04ce8ada60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.366 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.367 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f04ce8adaf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.367 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.367 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f04ce8ad880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.367 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.367 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f04ce8af3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.367 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.367 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f04ce8af6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.367 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.367 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f04ce8af410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.367 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.367 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f04ce8ac470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.368 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.368 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f04d17142f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.368 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.368 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f04ce8afe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.368 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.368 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f04ce8aede0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.368 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.368 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f04ce8aef00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.368 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.368 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f04ce8ac710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.369 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.369 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f04ce8aef60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.369 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.369 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f04ce8aefc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.369 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.369 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.369 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.370 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.370 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.370 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.370 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.370 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.370 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.370 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.370 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.370 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.371 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.371 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.371 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.371 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.371 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.371 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.371 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.371 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.372 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.372 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.372 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.372 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.372 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.372 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:18:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:18:31.372 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:18:31 compute-0 openstack_network_exporter[204387]: ERROR   17:18:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 17:18:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:18:31 compute-0 openstack_network_exporter[204387]: ERROR   17:18:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 17:18:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:18:33 compute-0 podman[254973]: 2026-01-26 17:18:33.178588555 +0000 UTC m=+0.066044203 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 26 17:18:35 compute-0 podman[254997]: 2026-01-26 17:18:35.179322224 +0000 UTC m=+0.066707871 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Jan 26 17:18:35 compute-0 nova_compute[185389]: 2026-01-26 17:18:35.218 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:18:35 compute-0 nova_compute[185389]: 2026-01-26 17:18:35.719 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:18:38 compute-0 podman[255018]: 2026-01-26 17:18:38.196523259 +0000 UTC m=+0.071244225 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, release-0.7.12=, vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, version=9.4, config_id=kepler, 
io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, release=1214.1726694543, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 26 17:18:38 compute-0 podman[255017]: 2026-01-26 17:18:38.207812316 +0000 UTC m=+0.085037790 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 26 17:18:38 compute-0 podman[255016]: 2026-01-26 17:18:38.236129764 +0000 UTC m=+0.120512531 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, 
org.label-schema.build-date=20251202, container_name=ovn_controller)
Jan 26 17:18:40 compute-0 nova_compute[185389]: 2026-01-26 17:18:40.219 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:18:40 compute-0 nova_compute[185389]: 2026-01-26 17:18:40.723 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:18:44 compute-0 nova_compute[185389]: 2026-01-26 17:18:44.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:18:45 compute-0 nova_compute[185389]: 2026-01-26 17:18:45.221 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:18:45 compute-0 nova_compute[185389]: 2026-01-26 17:18:45.724 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:18:50 compute-0 nova_compute[185389]: 2026-01-26 17:18:50.223 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:18:50 compute-0 nova_compute[185389]: 2026-01-26 17:18:50.728 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:18:55 compute-0 nova_compute[185389]: 2026-01-26 17:18:55.226 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:18:55 compute-0 nova_compute[185389]: 2026-01-26 17:18:55.731 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:18:59 compute-0 podman[201244]: time="2026-01-26T17:18:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 17:18:59 compute-0 podman[201244]: @ - - [26/Jan/2026:17:18:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 26 17:18:59 compute-0 podman[201244]: @ - - [26/Jan/2026:17:18:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3910 "" "Go-http-client/1.1"
Jan 26 17:19:00 compute-0 nova_compute[185389]: 2026-01-26 17:19:00.228 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:19:00 compute-0 nova_compute[185389]: 2026-01-26 17:19:00.734 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:19:01 compute-0 podman[255081]: 2026-01-26 17:19:01.192627506 +0000 UTC m=+0.069761015 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 26 17:19:01 compute-0 podman[255079]: 2026-01-26 17:19:01.20419946 +0000 UTC m=+0.087905058 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, version=9.6, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=openstack_network_exporter, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, 
description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, release=1755695350, name=ubi9-minimal, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 26 17:19:01 compute-0 podman[255080]: 2026-01-26 17:19:01.229606919 +0000 UTC m=+0.109299838 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260120, org.label-schema.vendor=CentOS, tcib_build_tag=93ecf842527b95c82e14fba92451bd07)
Jan 26 17:19:01 compute-0 openstack_network_exporter[204387]: ERROR   17:19:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 17:19:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:19:01 compute-0 openstack_network_exporter[204387]: ERROR   17:19:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 17:19:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:19:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:19:01.772 106955 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:19:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:19:01.773 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:19:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:19:01.773 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:19:04 compute-0 podman[255134]: 2026-01-26 17:19:04.164725376 +0000 UTC m=+0.057776830 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 26 17:19:05 compute-0 nova_compute[185389]: 2026-01-26 17:19:05.231 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:19:05 compute-0 nova_compute[185389]: 2026-01-26 17:19:05.746 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:19:06 compute-0 podman[255158]: 2026-01-26 17:19:06.177598826 +0000 UTC m=+0.070542896 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 17:19:06 compute-0 nova_compute[185389]: 2026-01-26 17:19:06.744 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:19:09 compute-0 podman[255178]: 2026-01-26 17:19:09.213115808 +0000 UTC m=+0.097337014 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:19:09 compute-0 podman[255177]: 2026-01-26 17:19:09.220365404 +0000 UTC m=+0.106640536 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, 
org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 26 17:19:09 compute-0 podman[255179]: 2026-01-26 17:19:09.227073987 +0000 UTC m=+0.106453801 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, version=9.4, config_id=kepler, architecture=x86_64, release=1214.1726694543, vcs-type=git, release-0.7.12=, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, io.openshift.tags=base rhel9, managed_by=edpm_ansible, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc.)
Jan 26 17:19:10 compute-0 nova_compute[185389]: 2026-01-26 17:19:10.238 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:19:10 compute-0 nova_compute[185389]: 2026-01-26 17:19:10.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:19:10 compute-0 nova_compute[185389]: 2026-01-26 17:19:10.719 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 17:19:10 compute-0 nova_compute[185389]: 2026-01-26 17:19:10.754 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:19:11 compute-0 nova_compute[185389]: 2026-01-26 17:19:11.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:19:11 compute-0 nova_compute[185389]: 2026-01-26 17:19:11.720 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 17:19:11 compute-0 nova_compute[185389]: 2026-01-26 17:19:11.720 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 17:19:11 compute-0 nova_compute[185389]: 2026-01-26 17:19:11.743 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 17:19:11 compute-0 nova_compute[185389]: 2026-01-26 17:19:11.743 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:19:11 compute-0 nova_compute[185389]: 2026-01-26 17:19:11.744 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:19:11 compute-0 nova_compute[185389]: 2026-01-26 17:19:11.744 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:19:15 compute-0 nova_compute[185389]: 2026-01-26 17:19:15.243 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:19:15 compute-0 nova_compute[185389]: 2026-01-26 17:19:15.758 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:19:17 compute-0 nova_compute[185389]: 2026-01-26 17:19:17.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:19:17 compute-0 nova_compute[185389]: 2026-01-26 17:19:17.759 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:19:17 compute-0 nova_compute[185389]: 2026-01-26 17:19:17.760 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:19:17 compute-0 nova_compute[185389]: 2026-01-26 17:19:17.760 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:19:17 compute-0 nova_compute[185389]: 2026-01-26 17:19:17.760 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 17:19:18 compute-0 nova_compute[185389]: 2026-01-26 17:19:18.109 185393 WARNING nova.virt.libvirt.driver [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:19:18 compute-0 nova_compute[185389]: 2026-01-26 17:19:18.110 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5351MB free_disk=72.41310501098633GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 17:19:18 compute-0 nova_compute[185389]: 2026-01-26 17:19:18.110 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:19:18 compute-0 nova_compute[185389]: 2026-01-26 17:19:18.110 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:19:18 compute-0 nova_compute[185389]: 2026-01-26 17:19:18.386 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 17:19:18 compute-0 nova_compute[185389]: 2026-01-26 17:19:18.387 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 17:19:18 compute-0 nova_compute[185389]: 2026-01-26 17:19:18.419 185393 DEBUG nova.compute.provider_tree [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:19:18 compute-0 nova_compute[185389]: 2026-01-26 17:19:18.477 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:19:18 compute-0 nova_compute[185389]: 2026-01-26 17:19:18.479 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 17:19:18 compute-0 nova_compute[185389]: 2026-01-26 17:19:18.479 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.368s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:19:20 compute-0 nova_compute[185389]: 2026-01-26 17:19:20.245 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:19:20 compute-0 nova_compute[185389]: 2026-01-26 17:19:20.762 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:19:24 compute-0 nova_compute[185389]: 2026-01-26 17:19:24.474 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:19:25 compute-0 nova_compute[185389]: 2026-01-26 17:19:25.249 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:19:25 compute-0 nova_compute[185389]: 2026-01-26 17:19:25.765 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:19:27 compute-0 nova_compute[185389]: 2026-01-26 17:19:27.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:19:29 compute-0 podman[201244]: time="2026-01-26T17:19:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 17:19:29 compute-0 podman[201244]: @ - - [26/Jan/2026:17:19:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 26 17:19:29 compute-0 podman[201244]: @ - - [26/Jan/2026:17:19:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3914 "" "Go-http-client/1.1"
Jan 26 17:19:30 compute-0 nova_compute[185389]: 2026-01-26 17:19:30.251 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:19:30 compute-0 nova_compute[185389]: 2026-01-26 17:19:30.767 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:19:31 compute-0 openstack_network_exporter[204387]: ERROR   17:19:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 17:19:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:19:31 compute-0 openstack_network_exporter[204387]: ERROR   17:19:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 17:19:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:19:32 compute-0 podman[255243]: 2026-01-26 17:19:32.18386687 +0000 UTC m=+0.070560726 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, release=1755695350, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc.)
Jan 26 17:19:32 compute-0 podman[255245]: 2026-01-26 17:19:32.198383445 +0000 UTC m=+0.077263319 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 26 17:19:32 compute-0 podman[255244]: 2026-01-26 17:19:32.21737586 +0000 UTC m=+0.099832680 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, org.label-schema.build-date=20260120, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Jan 26 17:19:32 compute-0 sshd-session[255241]: Invalid user sol from 80.94.92.171 port 36808
Jan 26 17:19:32 compute-0 sshd-session[255241]: Connection closed by invalid user sol 80.94.92.171 port 36808 [preauth]
Jan 26 17:19:35 compute-0 podman[255307]: 2026-01-26 17:19:35.178925014 +0000 UTC m=+0.068166512 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 26 17:19:35 compute-0 nova_compute[185389]: 2026-01-26 17:19:35.254 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:19:35 compute-0 nova_compute[185389]: 2026-01-26 17:19:35.768 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:19:37 compute-0 podman[255330]: 2026-01-26 17:19:37.176739325 +0000 UTC m=+0.068703136 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Jan 26 17:19:40 compute-0 podman[255351]: 2026-01-26 17:19:40.209613145 +0000 UTC m=+0.083329553 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, container_name=kepler, release-0.7.12=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.buildah.version=1.29.0, release=1214.1726694543, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=Red Hat, Inc., config_id=kepler, architecture=x86_64, name=ubi9, summary=Provides the latest 
release of Red Hat Universal Base Image 9., vcs-type=git, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 26 17:19:40 compute-0 podman[255350]: 2026-01-26 17:19:40.234236793 +0000 UTC m=+0.111961080 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 26 17:19:40 compute-0 nova_compute[185389]: 2026-01-26 17:19:40.255 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:19:40 compute-0 podman[255349]: 2026-01-26 17:19:40.264502075 +0000 UTC m=+0.146689913 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS 
Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 17:19:40 compute-0 nova_compute[185389]: 2026-01-26 17:19:40.770 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:19:45 compute-0 nova_compute[185389]: 2026-01-26 17:19:45.258 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:19:45 compute-0 nova_compute[185389]: 2026-01-26 17:19:45.772 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:19:50 compute-0 nova_compute[185389]: 2026-01-26 17:19:50.261 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:19:50 compute-0 nova_compute[185389]: 2026-01-26 17:19:50.774 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:19:55 compute-0 nova_compute[185389]: 2026-01-26 17:19:55.263 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:19:55 compute-0 nova_compute[185389]: 2026-01-26 17:19:55.775 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:19:59 compute-0 podman[201244]: time="2026-01-26T17:19:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 17:19:59 compute-0 podman[201244]: @ - - [26/Jan/2026:17:19:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 26 17:19:59 compute-0 podman[201244]: @ - - [26/Jan/2026:17:19:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3916 "" "Go-http-client/1.1"
Jan 26 17:20:00 compute-0 nova_compute[185389]: 2026-01-26 17:20:00.265 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:20:00 compute-0 nova_compute[185389]: 2026-01-26 17:20:00.778 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:20:01 compute-0 openstack_network_exporter[204387]: ERROR   17:20:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 17:20:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:20:01 compute-0 openstack_network_exporter[204387]: ERROR   17:20:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 17:20:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:20:01.774 106955 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:20:01.774 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:20:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:20:01.775 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:20:03 compute-0 podman[255412]: 2026-01-26 17:20:03.185540674 +0000 UTC m=+0.077893906 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, config_id=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, version=9.6, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, container_name=openstack_network_exporter, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container)
Jan 26 17:20:03 compute-0 podman[255413]: 2026-01-26 17:20:03.187544999 +0000 UTC m=+0.076442777 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, org.label-schema.license=GPLv2, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20260120, org.label-schema.schema-version=1.0)
Jan 26 17:20:03 compute-0 podman[255414]: 2026-01-26 17:20:03.202271118 +0000 UTC m=+0.086908340 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 17:20:05 compute-0 nova_compute[185389]: 2026-01-26 17:20:05.268 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:20:05 compute-0 nova_compute[185389]: 2026-01-26 17:20:05.781 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:20:06 compute-0 podman[255470]: 2026-01-26 17:20:06.208158475 +0000 UTC m=+0.096588044 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 26 17:20:06 compute-0 nova_compute[185389]: 2026-01-26 17:20:06.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:20:08 compute-0 podman[255494]: 2026-01-26 17:20:08.208511556 +0000 UTC m=+0.102789842 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 26 17:20:10 compute-0 nova_compute[185389]: 2026-01-26 17:20:10.269 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:20:10 compute-0 nova_compute[185389]: 2026-01-26 17:20:10.784 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:20:11 compute-0 podman[255513]: 2026-01-26 17:20:11.192853187 +0000 UTC m=+0.074980036 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 26 17:20:11 compute-0 podman[255514]: 2026-01-26 17:20:11.223338104 +0000 UTC m=+0.104640221 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, vcs-type=git, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, config_id=kepler, com.redhat.component=ubi9-container, version=9.4, name=ubi9, architecture=x86_64, maintainer=Red Hat, Inc., release-0.7.12=)
Jan 26 17:20:11 compute-0 podman[255512]: 2026-01-26 17:20:11.22353815 +0000 UTC m=+0.108306351 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 26 17:20:12 compute-0 nova_compute[185389]: 2026-01-26 17:20:12.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:20:12 compute-0 nova_compute[185389]: 2026-01-26 17:20:12.720 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 17:20:12 compute-0 nova_compute[185389]: 2026-01-26 17:20:12.720 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 17:20:12 compute-0 nova_compute[185389]: 2026-01-26 17:20:12.735 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 17:20:12 compute-0 nova_compute[185389]: 2026-01-26 17:20:12.736 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:20:12 compute-0 nova_compute[185389]: 2026-01-26 17:20:12.737 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:20:12 compute-0 nova_compute[185389]: 2026-01-26 17:20:12.737 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 17:20:13 compute-0 nova_compute[185389]: 2026-01-26 17:20:13.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:20:13 compute-0 nova_compute[185389]: 2026-01-26 17:20:13.721 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:20:15 compute-0 nova_compute[185389]: 2026-01-26 17:20:15.271 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:20:15 compute-0 nova_compute[185389]: 2026-01-26 17:20:15.786 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:20:18 compute-0 nova_compute[185389]: 2026-01-26 17:20:18.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:20:18 compute-0 nova_compute[185389]: 2026-01-26 17:20:18.772 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:20:18 compute-0 nova_compute[185389]: 2026-01-26 17:20:18.773 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:20:18 compute-0 nova_compute[185389]: 2026-01-26 17:20:18.773 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:20:18 compute-0 nova_compute[185389]: 2026-01-26 17:20:18.774 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 17:20:19 compute-0 nova_compute[185389]: 2026-01-26 17:20:19.138 185393 WARNING nova.virt.libvirt.driver [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:20:19 compute-0 nova_compute[185389]: 2026-01-26 17:20:19.139 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5357MB free_disk=72.41310501098633GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 17:20:19 compute-0 nova_compute[185389]: 2026-01-26 17:20:19.140 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:20:19 compute-0 nova_compute[185389]: 2026-01-26 17:20:19.140 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:20:19 compute-0 nova_compute[185389]: 2026-01-26 17:20:19.311 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 17:20:19 compute-0 nova_compute[185389]: 2026-01-26 17:20:19.311 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 17:20:19 compute-0 nova_compute[185389]: 2026-01-26 17:20:19.366 185393 DEBUG nova.compute.provider_tree [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:20:19 compute-0 nova_compute[185389]: 2026-01-26 17:20:19.677 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:20:19 compute-0 nova_compute[185389]: 2026-01-26 17:20:19.679 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 17:20:19 compute-0 nova_compute[185389]: 2026-01-26 17:20:19.680 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.540s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:20:20 compute-0 nova_compute[185389]: 2026-01-26 17:20:20.274 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:20:20 compute-0 nova_compute[185389]: 2026-01-26 17:20:20.789 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:20:25 compute-0 nova_compute[185389]: 2026-01-26 17:20:25.277 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:20:25 compute-0 nova_compute[185389]: 2026-01-26 17:20:25.675 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:20:25 compute-0 nova_compute[185389]: 2026-01-26 17:20:25.792 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:20:28 compute-0 nova_compute[185389]: 2026-01-26 17:20:28.714 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:20:28 compute-0 nova_compute[185389]: 2026-01-26 17:20:28.739 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:20:29 compute-0 podman[201244]: time="2026-01-26T17:20:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 17:20:29 compute-0 podman[201244]: @ - - [26/Jan/2026:17:20:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 26 17:20:29 compute-0 podman[201244]: @ - - [26/Jan/2026:17:20:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3919 "" "Go-http-client/1.1"
Jan 26 17:20:30 compute-0 nova_compute[185389]: 2026-01-26 17:20:30.279 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:20:30 compute-0 nova_compute[185389]: 2026-01-26 17:20:30.794 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.355 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.355 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.355 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.356 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f04ce8af020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.356 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.357 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.357 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.357 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.357 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.357 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.357 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.358 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.358 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f04ce8af080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.358 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.358 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f04ce8af0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.359 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.359 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f04ce8af470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.359 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.358 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.359 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.359 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.359 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f04ce8ac260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.360 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.360 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f04cfa81c70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.360 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.360 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f04ce8ac170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.359 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'cpu': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.360 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'cpu': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.361 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8adb20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'cpu': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.361 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'cpu': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.361 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'cpu': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.361 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'cpu': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.361 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'cpu': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.361 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'cpu': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.361 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'cpu': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.361 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'cpu': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.361 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'cpu': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.362 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'cpu': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.362 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'cpu': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.362 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'cpu': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.362 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04d221b680>] with cache [{}], pollster history [{'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'cpu': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.360 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.362 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f04ce8af140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.362 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.363 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f04ce8adb50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.363 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.363 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f04ce8ad9a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.363 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.363 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f04ce8af1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.363 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.363 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f04ce8ad8e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.363 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.363 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f04ce8ada60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.363 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.363 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f04ce8adaf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.363 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.364 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f04ce8ad880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.364 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.364 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f04ce8af3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.364 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.364 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f04ce8af6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.364 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.364 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f04ce8af410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.364 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.364 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f04ce8ac470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.364 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.364 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f04d17142f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.364 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.365 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f04ce8afe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.365 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.365 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f04ce8aede0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.365 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.365 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f04ce8aef00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.365 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.365 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f04ce8ac710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.365 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.365 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f04ce8aef60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.365 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.365 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f04ce8aefc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.365 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.366 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.366 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.366 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.366 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.366 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.366 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.366 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.366 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.366 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.367 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.367 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.367 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.367 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.367 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.367 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.367 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.367 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.367 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.367 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.368 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.368 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.368 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.368 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.368 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.368 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:20:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:20:31.368 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:20:31 compute-0 openstack_network_exporter[204387]: ERROR   17:20:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 17:20:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:20:31 compute-0 openstack_network_exporter[204387]: ERROR   17:20:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 17:20:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:20:32 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:20:32.777 106955 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '0e:35:12', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'de:09:3c:73:7d:ab'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 17:20:32 compute-0 nova_compute[185389]: 2026-01-26 17:20:32.778 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:20:32 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:20:32.778 106955 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 26 17:20:34 compute-0 podman[255581]: 2026-01-26 17:20:34.223290128 +0000 UTC m=+0.075170760 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 26 17:20:34 compute-0 podman[255580]: 2026-01-26 17:20:34.227519104 +0000 UTC m=+0.089411099 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260120, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Jan 26 17:20:34 compute-0 podman[255579]: 2026-01-26 17:20:34.23327728 +0000 UTC m=+0.099935714 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, distribution-scope=public, vcs-type=git, config_id=openstack_network_exporter, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, version=9.6, architecture=x86_64, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41)
Jan 26 17:20:35 compute-0 nova_compute[185389]: 2026-01-26 17:20:35.281 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:20:35 compute-0 nova_compute[185389]: 2026-01-26 17:20:35.796 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:20:37 compute-0 podman[255643]: 2026-01-26 17:20:37.224503658 +0000 UTC m=+0.117231493 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 26 17:20:39 compute-0 podman[255666]: 2026-01-26 17:20:39.209525332 +0000 UTC m=+0.101763903 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:20:39 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:20:39.780 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1c72c11d-5050-47c3-89e8-912766588fb3, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:20:40 compute-0 nova_compute[185389]: 2026-01-26 17:20:40.284 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:20:40 compute-0 nova_compute[185389]: 2026-01-26 17:20:40.800 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:20:42 compute-0 podman[255685]: 2026-01-26 17:20:42.218838953 +0000 UTC m=+0.098506368 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, vcs-type=git, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, container_name=kepler, io.buildah.version=1.29.0, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., config_id=kepler, maintainer=Red Hat, Inc., architecture=x86_64, version=9.4)
Jan 26 17:20:42 compute-0 podman[255684]: 2026-01-26 17:20:42.254302905 +0000 UTC m=+0.132148740 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 26 17:20:42 compute-0 podman[255683]: 2026-01-26 17:20:42.280724373 +0000 UTC m=+0.165826446 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, 
managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 26 17:20:45 compute-0 nova_compute[185389]: 2026-01-26 17:20:45.288 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:20:45 compute-0 nova_compute[185389]: 2026-01-26 17:20:45.802 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:20:50 compute-0 nova_compute[185389]: 2026-01-26 17:20:50.291 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:20:50 compute-0 nova_compute[185389]: 2026-01-26 17:20:50.806 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:20:55 compute-0 nova_compute[185389]: 2026-01-26 17:20:55.292 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:20:55 compute-0 nova_compute[185389]: 2026-01-26 17:20:55.808 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:20:59 compute-0 podman[201244]: time="2026-01-26T17:20:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 17:20:59 compute-0 podman[201244]: @ - - [26/Jan/2026:17:20:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 26 17:20:59 compute-0 podman[201244]: @ - - [26/Jan/2026:17:20:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3915 "" "Go-http-client/1.1"
Jan 26 17:21:00 compute-0 nova_compute[185389]: 2026-01-26 17:21:00.300 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:00 compute-0 nova_compute[185389]: 2026-01-26 17:21:00.810 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:01 compute-0 openstack_network_exporter[204387]: ERROR   17:21:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 17:21:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:21:01 compute-0 openstack_network_exporter[204387]: ERROR   17:21:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 17:21:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:21:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:01.775 106955 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:21:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:01.776 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:21:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:01.776 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:21:02 compute-0 ovn_controller[97699]: 2026-01-26T17:21:02Z|00064|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Jan 26 17:21:05 compute-0 podman[255747]: 2026-01-26 17:21:05.199072161 +0000 UTC m=+0.078656516 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, managed_by=edpm_ansible, container_name=openstack_network_exporter, vcs-type=git, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, maintainer=Red Hat, Inc., architecture=x86_64, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': 
['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container)
Jan 26 17:21:05 compute-0 podman[255748]: 2026-01-26 17:21:05.205969458 +0000 UTC m=+0.084190297 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20260120, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, tcib_managed=true, config_id=ceilometer_agent_compute)
Jan 26 17:21:05 compute-0 podman[255749]: 2026-01-26 17:21:05.218736455 +0000 UTC m=+0.093343206 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 26 17:21:05 compute-0 nova_compute[185389]: 2026-01-26 17:21:05.303 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:05 compute-0 nova_compute[185389]: 2026-01-26 17:21:05.813 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:08 compute-0 podman[255808]: 2026-01-26 17:21:08.209922952 +0000 UTC m=+0.097363714 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 26 17:21:08 compute-0 nova_compute[185389]: 2026-01-26 17:21:08.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:21:10 compute-0 podman[255831]: 2026-01-26 17:21:10.181243144 +0000 UTC m=+0.074176024 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202)
Jan 26 17:21:10 compute-0 nova_compute[185389]: 2026-01-26 17:21:10.304 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:10 compute-0 nova_compute[185389]: 2026-01-26 17:21:10.816 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:12 compute-0 nova_compute[185389]: 2026-01-26 17:21:12.563 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:12 compute-0 nova_compute[185389]: 2026-01-26 17:21:12.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:21:12 compute-0 nova_compute[185389]: 2026-01-26 17:21:12.720 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 17:21:12 compute-0 nova_compute[185389]: 2026-01-26 17:21:12.720 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 17:21:12 compute-0 nova_compute[185389]: 2026-01-26 17:21:12.754 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 17:21:12 compute-0 nova_compute[185389]: 2026-01-26 17:21:12.755 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:21:12 compute-0 nova_compute[185389]: 2026-01-26 17:21:12.756 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 17:21:13 compute-0 podman[255851]: 2026-01-26 17:21:13.216349856 +0000 UTC m=+0.083438317 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, com.redhat.component=ubi9-container, name=ubi9, vcs-type=git, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, managed_by=edpm_ansible, vendor=Red Hat, Inc., release=1214.1726694543, version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, container_name=kepler, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Jan 26 17:21:13 compute-0 podman[255850]: 2026-01-26 17:21:13.222056641 +0000 UTC m=+0.105629129 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:21:13 compute-0 podman[255849]: 2026-01-26 17:21:13.25153085 +0000 UTC m=+0.125928799 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.vendor=CentOS, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 26 17:21:13 compute-0 nova_compute[185389]: 2026-01-26 17:21:13.721 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:21:14 compute-0 nova_compute[185389]: 2026-01-26 17:21:14.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:21:14 compute-0 nova_compute[185389]: 2026-01-26 17:21:14.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:21:15 compute-0 nova_compute[185389]: 2026-01-26 17:21:15.283 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:15 compute-0 nova_compute[185389]: 2026-01-26 17:21:15.306 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:15 compute-0 nova_compute[185389]: 2026-01-26 17:21:15.384 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:15 compute-0 nova_compute[185389]: 2026-01-26 17:21:15.649 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:15 compute-0 nova_compute[185389]: 2026-01-26 17:21:15.818 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:19 compute-0 nova_compute[185389]: 2026-01-26 17:21:19.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:21:19 compute-0 nova_compute[185389]: 2026-01-26 17:21:19.760 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:21:19 compute-0 nova_compute[185389]: 2026-01-26 17:21:19.760 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:21:19 compute-0 nova_compute[185389]: 2026-01-26 17:21:19.761 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:21:19 compute-0 nova_compute[185389]: 2026-01-26 17:21:19.761 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 17:21:20 compute-0 nova_compute[185389]: 2026-01-26 17:21:20.128 185393 WARNING nova.virt.libvirt.driver [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:21:20 compute-0 nova_compute[185389]: 2026-01-26 17:21:20.129 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5357MB free_disk=72.41304397583008GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 17:21:20 compute-0 nova_compute[185389]: 2026-01-26 17:21:20.130 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:21:20 compute-0 nova_compute[185389]: 2026-01-26 17:21:20.130 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:21:20 compute-0 nova_compute[185389]: 2026-01-26 17:21:20.208 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 17:21:20 compute-0 nova_compute[185389]: 2026-01-26 17:21:20.208 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 17:21:20 compute-0 nova_compute[185389]: 2026-01-26 17:21:20.243 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:20 compute-0 nova_compute[185389]: 2026-01-26 17:21:20.262 185393 DEBUG nova.compute.provider_tree [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:21:20 compute-0 nova_compute[185389]: 2026-01-26 17:21:20.290 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:21:20 compute-0 nova_compute[185389]: 2026-01-26 17:21:20.292 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 17:21:20 compute-0 nova_compute[185389]: 2026-01-26 17:21:20.292 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.162s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:21:20 compute-0 nova_compute[185389]: 2026-01-26 17:21:20.308 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:20 compute-0 nova_compute[185389]: 2026-01-26 17:21:20.820 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:23 compute-0 nova_compute[185389]: 2026-01-26 17:21:23.036 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:23 compute-0 nova_compute[185389]: 2026-01-26 17:21:23.072 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:24 compute-0 nova_compute[185389]: 2026-01-26 17:21:24.931 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:25 compute-0 nova_compute[185389]: 2026-01-26 17:21:25.310 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:25 compute-0 nova_compute[185389]: 2026-01-26 17:21:25.406 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:25 compute-0 nova_compute[185389]: 2026-01-26 17:21:25.823 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:27 compute-0 nova_compute[185389]: 2026-01-26 17:21:27.287 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:21:29 compute-0 nova_compute[185389]: 2026-01-26 17:21:29.718 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:21:29 compute-0 podman[201244]: time="2026-01-26T17:21:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 17:21:29 compute-0 podman[201244]: @ - - [26/Jan/2026:17:21:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 26 17:21:29 compute-0 podman[201244]: @ - - [26/Jan/2026:17:21:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3918 "" "Go-http-client/1.1"
Jan 26 17:21:30 compute-0 nova_compute[185389]: 2026-01-26 17:21:30.312 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:30 compute-0 nova_compute[185389]: 2026-01-26 17:21:30.825 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:31 compute-0 openstack_network_exporter[204387]: ERROR   17:21:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 17:21:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:21:31 compute-0 openstack_network_exporter[204387]: ERROR   17:21:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 17:21:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:21:32 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:32.893 106955 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '0e:35:12', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'de:09:3c:73:7d:ab'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 17:21:32 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:32.894 106955 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 26 17:21:32 compute-0 nova_compute[185389]: 2026-01-26 17:21:32.899 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:33 compute-0 nova_compute[185389]: 2026-01-26 17:21:33.683 185393 DEBUG oslo_concurrency.lockutils [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Acquiring lock "186e87cb-beb9-48df-8b10-dfc5c8afe996" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:21:33 compute-0 nova_compute[185389]: 2026-01-26 17:21:33.684 185393 DEBUG oslo_concurrency.lockutils [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Lock "186e87cb-beb9-48df-8b10-dfc5c8afe996" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:21:33 compute-0 nova_compute[185389]: 2026-01-26 17:21:33.961 185393 DEBUG nova.compute.manager [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 17:21:34 compute-0 nova_compute[185389]: 2026-01-26 17:21:34.080 185393 DEBUG oslo_concurrency.lockutils [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:21:34 compute-0 nova_compute[185389]: 2026-01-26 17:21:34.081 185393 DEBUG oslo_concurrency.lockutils [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:21:34 compute-0 nova_compute[185389]: 2026-01-26 17:21:34.092 185393 DEBUG nova.virt.hardware [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 17:21:34 compute-0 nova_compute[185389]: 2026-01-26 17:21:34.092 185393 INFO nova.compute.claims [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Claim successful on node compute-0.ctlplane.example.com
Jan 26 17:21:34 compute-0 nova_compute[185389]: 2026-01-26 17:21:34.250 185393 DEBUG nova.compute.provider_tree [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:21:34 compute-0 nova_compute[185389]: 2026-01-26 17:21:34.267 185393 DEBUG nova.scheduler.client.report [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:21:34 compute-0 nova_compute[185389]: 2026-01-26 17:21:34.307 185393 DEBUG oslo_concurrency.lockutils [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.226s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:21:34 compute-0 nova_compute[185389]: 2026-01-26 17:21:34.308 185393 DEBUG nova.compute.manager [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 17:21:34 compute-0 nova_compute[185389]: 2026-01-26 17:21:34.362 185393 DEBUG nova.compute.manager [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 17:21:34 compute-0 nova_compute[185389]: 2026-01-26 17:21:34.363 185393 DEBUG nova.network.neutron [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 17:21:34 compute-0 nova_compute[185389]: 2026-01-26 17:21:34.389 185393 INFO nova.virt.libvirt.driver [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 17:21:34 compute-0 nova_compute[185389]: 2026-01-26 17:21:34.417 185393 DEBUG nova.compute.manager [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 17:21:34 compute-0 nova_compute[185389]: 2026-01-26 17:21:34.521 185393 DEBUG nova.compute.manager [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 17:21:34 compute-0 nova_compute[185389]: 2026-01-26 17:21:34.522 185393 DEBUG nova.virt.libvirt.driver [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 17:21:34 compute-0 nova_compute[185389]: 2026-01-26 17:21:34.523 185393 INFO nova.virt.libvirt.driver [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Creating image(s)
Jan 26 17:21:34 compute-0 nova_compute[185389]: 2026-01-26 17:21:34.524 185393 DEBUG oslo_concurrency.lockutils [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Acquiring lock "/var/lib/nova/instances/186e87cb-beb9-48df-8b10-dfc5c8afe996/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:21:34 compute-0 nova_compute[185389]: 2026-01-26 17:21:34.524 185393 DEBUG oslo_concurrency.lockutils [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Lock "/var/lib/nova/instances/186e87cb-beb9-48df-8b10-dfc5c8afe996/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:21:34 compute-0 nova_compute[185389]: 2026-01-26 17:21:34.525 185393 DEBUG oslo_concurrency.lockutils [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Lock "/var/lib/nova/instances/186e87cb-beb9-48df-8b10-dfc5c8afe996/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:21:34 compute-0 nova_compute[185389]: 2026-01-26 17:21:34.526 185393 DEBUG oslo_concurrency.lockutils [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Acquiring lock "fce4846b88d54584e094a4cd72b3ae3b642ea493" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:21:34 compute-0 nova_compute[185389]: 2026-01-26 17:21:34.526 185393 DEBUG oslo_concurrency.lockutils [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Lock "fce4846b88d54584e094a4cd72b3ae3b642ea493" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:21:34 compute-0 nova_compute[185389]: 2026-01-26 17:21:34.944 185393 DEBUG nova.policy [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6acd3be55c754b3dbf8ef6c0922b18ae', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9b9ff6ad3012499db2eb0a82a1ccbcaa', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 17:21:35 compute-0 nova_compute[185389]: 2026-01-26 17:21:35.063 185393 DEBUG oslo_concurrency.lockutils [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] Acquiring lock "cecfd5ba-76f1-47f6-8845-36e6c7ed9773" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:21:35 compute-0 nova_compute[185389]: 2026-01-26 17:21:35.064 185393 DEBUG oslo_concurrency.lockutils [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] Lock "cecfd5ba-76f1-47f6-8845-36e6c7ed9773" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:21:35 compute-0 nova_compute[185389]: 2026-01-26 17:21:35.080 185393 DEBUG nova.compute.manager [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] [instance: cecfd5ba-76f1-47f6-8845-36e6c7ed9773] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 17:21:35 compute-0 nova_compute[185389]: 2026-01-26 17:21:35.169 185393 DEBUG oslo_concurrency.lockutils [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:21:35 compute-0 nova_compute[185389]: 2026-01-26 17:21:35.170 185393 DEBUG oslo_concurrency.lockutils [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:21:35 compute-0 nova_compute[185389]: 2026-01-26 17:21:35.177 185393 DEBUG nova.virt.hardware [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 17:21:35 compute-0 nova_compute[185389]: 2026-01-26 17:21:35.177 185393 INFO nova.compute.claims [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] [instance: cecfd5ba-76f1-47f6-8845-36e6c7ed9773] Claim successful on node compute-0.ctlplane.example.com
Jan 26 17:21:35 compute-0 nova_compute[185389]: 2026-01-26 17:21:35.313 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:35 compute-0 nova_compute[185389]: 2026-01-26 17:21:35.321 185393 DEBUG nova.compute.provider_tree [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:21:35 compute-0 nova_compute[185389]: 2026-01-26 17:21:35.337 185393 DEBUG nova.scheduler.client.report [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:21:35 compute-0 nova_compute[185389]: 2026-01-26 17:21:35.364 185393 DEBUG oslo_concurrency.lockutils [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.194s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:21:35 compute-0 nova_compute[185389]: 2026-01-26 17:21:35.364 185393 DEBUG nova.compute.manager [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] [instance: cecfd5ba-76f1-47f6-8845-36e6c7ed9773] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 17:21:35 compute-0 nova_compute[185389]: 2026-01-26 17:21:35.417 185393 DEBUG nova.compute.manager [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] [instance: cecfd5ba-76f1-47f6-8845-36e6c7ed9773] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 17:21:35 compute-0 nova_compute[185389]: 2026-01-26 17:21:35.417 185393 DEBUG nova.network.neutron [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] [instance: cecfd5ba-76f1-47f6-8845-36e6c7ed9773] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 17:21:35 compute-0 nova_compute[185389]: 2026-01-26 17:21:35.437 185393 INFO nova.virt.libvirt.driver [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] [instance: cecfd5ba-76f1-47f6-8845-36e6c7ed9773] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 17:21:35 compute-0 nova_compute[185389]: 2026-01-26 17:21:35.458 185393 DEBUG nova.compute.manager [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] [instance: cecfd5ba-76f1-47f6-8845-36e6c7ed9773] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 17:21:35 compute-0 nova_compute[185389]: 2026-01-26 17:21:35.570 185393 DEBUG nova.compute.manager [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] [instance: cecfd5ba-76f1-47f6-8845-36e6c7ed9773] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 17:21:35 compute-0 nova_compute[185389]: 2026-01-26 17:21:35.572 185393 DEBUG nova.virt.libvirt.driver [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] [instance: cecfd5ba-76f1-47f6-8845-36e6c7ed9773] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 17:21:35 compute-0 nova_compute[185389]: 2026-01-26 17:21:35.572 185393 INFO nova.virt.libvirt.driver [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] [instance: cecfd5ba-76f1-47f6-8845-36e6c7ed9773] Creating image(s)
Jan 26 17:21:35 compute-0 nova_compute[185389]: 2026-01-26 17:21:35.573 185393 DEBUG oslo_concurrency.lockutils [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] Acquiring lock "/var/lib/nova/instances/cecfd5ba-76f1-47f6-8845-36e6c7ed9773/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:21:35 compute-0 nova_compute[185389]: 2026-01-26 17:21:35.573 185393 DEBUG oslo_concurrency.lockutils [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] Lock "/var/lib/nova/instances/cecfd5ba-76f1-47f6-8845-36e6c7ed9773/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:21:35 compute-0 nova_compute[185389]: 2026-01-26 17:21:35.574 185393 DEBUG oslo_concurrency.lockutils [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] Lock "/var/lib/nova/instances/cecfd5ba-76f1-47f6-8845-36e6c7ed9773/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:21:35 compute-0 nova_compute[185389]: 2026-01-26 17:21:35.574 185393 DEBUG oslo_concurrency.lockutils [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] Acquiring lock "fce4846b88d54584e094a4cd72b3ae3b642ea493" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:21:35 compute-0 nova_compute[185389]: 2026-01-26 17:21:35.826 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:36 compute-0 nova_compute[185389]: 2026-01-26 17:21:36.150 185393 DEBUG oslo_concurrency.processutils [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/fce4846b88d54584e094a4cd72b3ae3b642ea493.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:21:36 compute-0 podman[255913]: 2026-01-26 17:21:36.190319051 +0000 UTC m=+0.071640987 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 26 17:21:36 compute-0 podman[255912]: 2026-01-26 17:21:36.206648753 +0000 UTC m=+0.094100195 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, config_id=ceilometer_agent_compute, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260120)
Jan 26 17:21:36 compute-0 podman[255911]: 2026-01-26 17:21:36.213104889 +0000 UTC m=+0.102247047 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, version=9.6, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, vcs-type=git, architecture=x86_64, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 26 17:21:36 compute-0 nova_compute[185389]: 2026-01-26 17:21:36.223 185393 DEBUG oslo_concurrency.processutils [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/fce4846b88d54584e094a4cd72b3ae3b642ea493.part --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:21:36 compute-0 nova_compute[185389]: 2026-01-26 17:21:36.225 185393 DEBUG nova.virt.images [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] 90acf026-cf3a-409a-999e-35d89bb9a6bf was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Jan 26 17:21:36 compute-0 nova_compute[185389]: 2026-01-26 17:21:36.226 185393 DEBUG nova.privsep.utils [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Jan 26 17:21:36 compute-0 nova_compute[185389]: 2026-01-26 17:21:36.227 185393 DEBUG oslo_concurrency.processutils [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/fce4846b88d54584e094a4cd72b3ae3b642ea493.part /var/lib/nova/instances/_base/fce4846b88d54584e094a4cd72b3ae3b642ea493.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:21:36 compute-0 nova_compute[185389]: 2026-01-26 17:21:36.245 185393 DEBUG nova.network.neutron [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Successfully created port: 6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 17:21:36 compute-0 nova_compute[185389]: 2026-01-26 17:21:36.272 185393 DEBUG nova.policy [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'be42df6828874d2e90f3dabbd62031cc', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '18301e8b436a4fa7ba388e173f305ba9', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 17:21:36 compute-0 nova_compute[185389]: 2026-01-26 17:21:36.489 185393 DEBUG oslo_concurrency.processutils [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/fce4846b88d54584e094a4cd72b3ae3b642ea493.part /var/lib/nova/instances/_base/fce4846b88d54584e094a4cd72b3ae3b642ea493.converted" returned: 0 in 0.263s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:21:36 compute-0 nova_compute[185389]: 2026-01-26 17:21:36.495 185393 DEBUG oslo_concurrency.processutils [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/fce4846b88d54584e094a4cd72b3ae3b642ea493.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:21:36 compute-0 nova_compute[185389]: 2026-01-26 17:21:36.560 185393 DEBUG oslo_concurrency.processutils [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/fce4846b88d54584e094a4cd72b3ae3b642ea493.converted --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:21:36 compute-0 nova_compute[185389]: 2026-01-26 17:21:36.561 185393 DEBUG oslo_concurrency.lockutils [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Lock "fce4846b88d54584e094a4cd72b3ae3b642ea493" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.035s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:21:36 compute-0 nova_compute[185389]: 2026-01-26 17:21:36.575 185393 DEBUG oslo_concurrency.lockutils [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] Lock "fce4846b88d54584e094a4cd72b3ae3b642ea493" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 1.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:21:36 compute-0 nova_compute[185389]: 2026-01-26 17:21:36.576 185393 DEBUG oslo_concurrency.lockutils [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] Lock "fce4846b88d54584e094a4cd72b3ae3b642ea493" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:21:36 compute-0 nova_compute[185389]: 2026-01-26 17:21:36.589 185393 DEBUG oslo_concurrency.processutils [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/fce4846b88d54584e094a4cd72b3ae3b642ea493 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:21:36 compute-0 nova_compute[185389]: 2026-01-26 17:21:36.606 185393 DEBUG oslo_concurrency.processutils [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/fce4846b88d54584e094a4cd72b3ae3b642ea493 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:21:36 compute-0 nova_compute[185389]: 2026-01-26 17:21:36.654 185393 DEBUG oslo_concurrency.processutils [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/fce4846b88d54584e094a4cd72b3ae3b642ea493 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:21:36 compute-0 nova_compute[185389]: 2026-01-26 17:21:36.656 185393 DEBUG oslo_concurrency.lockutils [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Acquiring lock "fce4846b88d54584e094a4cd72b3ae3b642ea493" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:21:36 compute-0 nova_compute[185389]: 2026-01-26 17:21:36.657 185393 DEBUG oslo_concurrency.lockutils [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Lock "fce4846b88d54584e094a4cd72b3ae3b642ea493" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:21:36 compute-0 nova_compute[185389]: 2026-01-26 17:21:36.673 185393 DEBUG oslo_concurrency.processutils [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/fce4846b88d54584e094a4cd72b3ae3b642ea493 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:21:36 compute-0 nova_compute[185389]: 2026-01-26 17:21:36.690 185393 DEBUG oslo_concurrency.processutils [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/fce4846b88d54584e094a4cd72b3ae3b642ea493 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:21:36 compute-0 nova_compute[185389]: 2026-01-26 17:21:36.691 185393 DEBUG oslo_concurrency.lockutils [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] Acquiring lock "fce4846b88d54584e094a4cd72b3ae3b642ea493" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:21:36 compute-0 nova_compute[185389]: 2026-01-26 17:21:36.737 185393 DEBUG oslo_concurrency.processutils [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/fce4846b88d54584e094a4cd72b3ae3b642ea493 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:21:36 compute-0 nova_compute[185389]: 2026-01-26 17:21:36.738 185393 DEBUG oslo_concurrency.processutils [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/fce4846b88d54584e094a4cd72b3ae3b642ea493,backing_fmt=raw /var/lib/nova/instances/186e87cb-beb9-48df-8b10-dfc5c8afe996/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:21:36 compute-0 nova_compute[185389]: 2026-01-26 17:21:36.782 185393 DEBUG oslo_concurrency.processutils [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/fce4846b88d54584e094a4cd72b3ae3b642ea493,backing_fmt=raw /var/lib/nova/instances/186e87cb-beb9-48df-8b10-dfc5c8afe996/disk 1073741824" returned: 0 in 0.044s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:21:36 compute-0 nova_compute[185389]: 2026-01-26 17:21:36.783 185393 DEBUG oslo_concurrency.lockutils [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Lock "fce4846b88d54584e094a4cd72b3ae3b642ea493" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.126s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:21:36 compute-0 nova_compute[185389]: 2026-01-26 17:21:36.784 185393 DEBUG oslo_concurrency.processutils [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/fce4846b88d54584e094a4cd72b3ae3b642ea493 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:21:36 compute-0 nova_compute[185389]: 2026-01-26 17:21:36.801 185393 DEBUG oslo_concurrency.lockutils [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] Lock "fce4846b88d54584e094a4cd72b3ae3b642ea493" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.109s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:21:36 compute-0 nova_compute[185389]: 2026-01-26 17:21:36.816 185393 DEBUG oslo_concurrency.processutils [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/fce4846b88d54584e094a4cd72b3ae3b642ea493 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:21:36 compute-0 nova_compute[185389]: 2026-01-26 17:21:36.847 185393 DEBUG oslo_concurrency.processutils [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/fce4846b88d54584e094a4cd72b3ae3b642ea493 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:21:36 compute-0 nova_compute[185389]: 2026-01-26 17:21:36.849 185393 DEBUG nova.virt.disk.api [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Checking if we can resize image /var/lib/nova/instances/186e87cb-beb9-48df-8b10-dfc5c8afe996/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 26 17:21:36 compute-0 nova_compute[185389]: 2026-01-26 17:21:36.849 185393 DEBUG oslo_concurrency.processutils [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/186e87cb-beb9-48df-8b10-dfc5c8afe996/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:21:36 compute-0 nova_compute[185389]: 2026-01-26 17:21:36.880 185393 DEBUG oslo_concurrency.processutils [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/fce4846b88d54584e094a4cd72b3ae3b642ea493 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:21:36 compute-0 nova_compute[185389]: 2026-01-26 17:21:36.882 185393 DEBUG oslo_concurrency.processutils [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/fce4846b88d54584e094a4cd72b3ae3b642ea493,backing_fmt=raw /var/lib/nova/instances/cecfd5ba-76f1-47f6-8845-36e6c7ed9773/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:21:36 compute-0 nova_compute[185389]: 2026-01-26 17:21:36.922 185393 DEBUG oslo_concurrency.processutils [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/186e87cb-beb9-48df-8b10-dfc5c8afe996/disk --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:21:36 compute-0 nova_compute[185389]: 2026-01-26 17:21:36.923 185393 DEBUG nova.virt.disk.api [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Cannot resize image /var/lib/nova/instances/186e87cb-beb9-48df-8b10-dfc5c8afe996/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 26 17:21:36 compute-0 nova_compute[185389]: 2026-01-26 17:21:36.924 185393 DEBUG nova.objects.instance [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Lazy-loading 'migration_context' on Instance uuid 186e87cb-beb9-48df-8b10-dfc5c8afe996 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 17:21:36 compute-0 nova_compute[185389]: 2026-01-26 17:21:36.930 185393 DEBUG oslo_concurrency.processutils [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/fce4846b88d54584e094a4cd72b3ae3b642ea493,backing_fmt=raw /var/lib/nova/instances/cecfd5ba-76f1-47f6-8845-36e6c7ed9773/disk 1073741824" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:21:36 compute-0 nova_compute[185389]: 2026-01-26 17:21:36.931 185393 DEBUG oslo_concurrency.lockutils [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] Lock "fce4846b88d54584e094a4cd72b3ae3b642ea493" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.130s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:21:36 compute-0 nova_compute[185389]: 2026-01-26 17:21:36.931 185393 DEBUG oslo_concurrency.processutils [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/fce4846b88d54584e094a4cd72b3ae3b642ea493 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:21:36 compute-0 nova_compute[185389]: 2026-01-26 17:21:36.953 185393 DEBUG nova.virt.libvirt.driver [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 17:21:36 compute-0 nova_compute[185389]: 2026-01-26 17:21:36.954 185393 DEBUG nova.virt.libvirt.driver [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Ensure instance console log exists: /var/lib/nova/instances/186e87cb-beb9-48df-8b10-dfc5c8afe996/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 17:21:36 compute-0 nova_compute[185389]: 2026-01-26 17:21:36.955 185393 DEBUG oslo_concurrency.lockutils [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:21:36 compute-0 nova_compute[185389]: 2026-01-26 17:21:36.956 185393 DEBUG oslo_concurrency.lockutils [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:21:36 compute-0 nova_compute[185389]: 2026-01-26 17:21:36.956 185393 DEBUG oslo_concurrency.lockutils [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:21:36 compute-0 nova_compute[185389]: 2026-01-26 17:21:36.991 185393 DEBUG oslo_concurrency.processutils [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/fce4846b88d54584e094a4cd72b3ae3b642ea493 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:21:36 compute-0 nova_compute[185389]: 2026-01-26 17:21:36.992 185393 DEBUG nova.virt.disk.api [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] Checking if we can resize image /var/lib/nova/instances/cecfd5ba-76f1-47f6-8845-36e6c7ed9773/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 26 17:21:36 compute-0 nova_compute[185389]: 2026-01-26 17:21:36.992 185393 DEBUG oslo_concurrency.processutils [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cecfd5ba-76f1-47f6-8845-36e6c7ed9773/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:21:37 compute-0 nova_compute[185389]: 2026-01-26 17:21:37.074 185393 DEBUG oslo_concurrency.processutils [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cecfd5ba-76f1-47f6-8845-36e6c7ed9773/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:21:37 compute-0 nova_compute[185389]: 2026-01-26 17:21:37.075 185393 DEBUG nova.virt.disk.api [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] Cannot resize image /var/lib/nova/instances/cecfd5ba-76f1-47f6-8845-36e6c7ed9773/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 26 17:21:37 compute-0 nova_compute[185389]: 2026-01-26 17:21:37.076 185393 DEBUG nova.objects.instance [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] Lazy-loading 'migration_context' on Instance uuid cecfd5ba-76f1-47f6-8845-36e6c7ed9773 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 17:21:37 compute-0 nova_compute[185389]: 2026-01-26 17:21:37.095 185393 DEBUG nova.virt.libvirt.driver [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] [instance: cecfd5ba-76f1-47f6-8845-36e6c7ed9773] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 17:21:37 compute-0 nova_compute[185389]: 2026-01-26 17:21:37.095 185393 DEBUG nova.virt.libvirt.driver [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] [instance: cecfd5ba-76f1-47f6-8845-36e6c7ed9773] Ensure instance console log exists: /var/lib/nova/instances/cecfd5ba-76f1-47f6-8845-36e6c7ed9773/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 17:21:37 compute-0 nova_compute[185389]: 2026-01-26 17:21:37.096 185393 DEBUG oslo_concurrency.lockutils [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:21:37 compute-0 nova_compute[185389]: 2026-01-26 17:21:37.097 185393 DEBUG oslo_concurrency.lockutils [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:21:37 compute-0 nova_compute[185389]: 2026-01-26 17:21:37.097 185393 DEBUG oslo_concurrency.lockutils [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:21:37 compute-0 nova_compute[185389]: 2026-01-26 17:21:37.688 185393 DEBUG oslo_concurrency.lockutils [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] Acquiring lock "8c28c24a-cab4-43b3-b9ee-4ce40d092c71" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:21:37 compute-0 nova_compute[185389]: 2026-01-26 17:21:37.688 185393 DEBUG oslo_concurrency.lockutils [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] Lock "8c28c24a-cab4-43b3-b9ee-4ce40d092c71" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:21:37 compute-0 nova_compute[185389]: 2026-01-26 17:21:37.709 185393 DEBUG nova.compute.manager [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] [instance: 8c28c24a-cab4-43b3-b9ee-4ce40d092c71] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 17:21:37 compute-0 nova_compute[185389]: 2026-01-26 17:21:37.907 185393 DEBUG oslo_concurrency.lockutils [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:21:37 compute-0 nova_compute[185389]: 2026-01-26 17:21:37.907 185393 DEBUG oslo_concurrency.lockutils [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:21:37 compute-0 nova_compute[185389]: 2026-01-26 17:21:37.915 185393 DEBUG nova.virt.hardware [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 17:21:37 compute-0 nova_compute[185389]: 2026-01-26 17:21:37.916 185393 INFO nova.compute.claims [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] [instance: 8c28c24a-cab4-43b3-b9ee-4ce40d092c71] Claim successful on node compute-0.ctlplane.example.com
Jan 26 17:21:38 compute-0 nova_compute[185389]: 2026-01-26 17:21:38.133 185393 DEBUG nova.compute.provider_tree [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:21:38 compute-0 nova_compute[185389]: 2026-01-26 17:21:38.387 185393 DEBUG nova.scheduler.client.report [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:21:38 compute-0 nova_compute[185389]: 2026-01-26 17:21:38.456 185393 DEBUG oslo_concurrency.lockutils [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.549s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:21:38 compute-0 nova_compute[185389]: 2026-01-26 17:21:38.457 185393 DEBUG nova.compute.manager [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] [instance: 8c28c24a-cab4-43b3-b9ee-4ce40d092c71] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 17:21:38 compute-0 nova_compute[185389]: 2026-01-26 17:21:38.536 185393 DEBUG nova.compute.manager [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] [instance: 8c28c24a-cab4-43b3-b9ee-4ce40d092c71] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 17:21:38 compute-0 nova_compute[185389]: 2026-01-26 17:21:38.537 185393 DEBUG nova.network.neutron [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] [instance: 8c28c24a-cab4-43b3-b9ee-4ce40d092c71] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 17:21:38 compute-0 nova_compute[185389]: 2026-01-26 17:21:38.565 185393 INFO nova.virt.libvirt.driver [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] [instance: 8c28c24a-cab4-43b3-b9ee-4ce40d092c71] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 17:21:38 compute-0 nova_compute[185389]: 2026-01-26 17:21:38.583 185393 DEBUG nova.compute.manager [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] [instance: 8c28c24a-cab4-43b3-b9ee-4ce40d092c71] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 17:21:38 compute-0 nova_compute[185389]: 2026-01-26 17:21:38.674 185393 DEBUG nova.compute.manager [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] [instance: 8c28c24a-cab4-43b3-b9ee-4ce40d092c71] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 17:21:38 compute-0 nova_compute[185389]: 2026-01-26 17:21:38.676 185393 DEBUG nova.virt.libvirt.driver [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] [instance: 8c28c24a-cab4-43b3-b9ee-4ce40d092c71] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 17:21:38 compute-0 nova_compute[185389]: 2026-01-26 17:21:38.676 185393 INFO nova.virt.libvirt.driver [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] [instance: 8c28c24a-cab4-43b3-b9ee-4ce40d092c71] Creating image(s)
Jan 26 17:21:38 compute-0 nova_compute[185389]: 2026-01-26 17:21:38.677 185393 DEBUG oslo_concurrency.lockutils [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] Acquiring lock "/var/lib/nova/instances/8c28c24a-cab4-43b3-b9ee-4ce40d092c71/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:21:38 compute-0 nova_compute[185389]: 2026-01-26 17:21:38.678 185393 DEBUG oslo_concurrency.lockutils [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] Lock "/var/lib/nova/instances/8c28c24a-cab4-43b3-b9ee-4ce40d092c71/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:21:38 compute-0 nova_compute[185389]: 2026-01-26 17:21:38.678 185393 DEBUG oslo_concurrency.lockutils [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] Lock "/var/lib/nova/instances/8c28c24a-cab4-43b3-b9ee-4ce40d092c71/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:21:38 compute-0 nova_compute[185389]: 2026-01-26 17:21:38.691 185393 DEBUG oslo_concurrency.processutils [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/fce4846b88d54584e094a4cd72b3ae3b642ea493 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:21:38 compute-0 nova_compute[185389]: 2026-01-26 17:21:38.754 185393 DEBUG oslo_concurrency.processutils [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/fce4846b88d54584e094a4cd72b3ae3b642ea493 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:21:38 compute-0 nova_compute[185389]: 2026-01-26 17:21:38.755 185393 DEBUG oslo_concurrency.lockutils [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] Acquiring lock "fce4846b88d54584e094a4cd72b3ae3b642ea493" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:21:38 compute-0 nova_compute[185389]: 2026-01-26 17:21:38.756 185393 DEBUG oslo_concurrency.lockutils [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] Lock "fce4846b88d54584e094a4cd72b3ae3b642ea493" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:21:38 compute-0 nova_compute[185389]: 2026-01-26 17:21:38.768 185393 DEBUG oslo_concurrency.processutils [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/fce4846b88d54584e094a4cd72b3ae3b642ea493 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:21:38 compute-0 nova_compute[185389]: 2026-01-26 17:21:38.832 185393 DEBUG oslo_concurrency.processutils [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/fce4846b88d54584e094a4cd72b3ae3b642ea493 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:21:38 compute-0 nova_compute[185389]: 2026-01-26 17:21:38.833 185393 DEBUG oslo_concurrency.processutils [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/fce4846b88d54584e094a4cd72b3ae3b642ea493,backing_fmt=raw /var/lib/nova/instances/8c28c24a-cab4-43b3-b9ee-4ce40d092c71/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:21:38 compute-0 nova_compute[185389]: 2026-01-26 17:21:38.926 185393 DEBUG nova.policy [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '06957310edd64b7e95b237aa77f5311d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '854cc1d25bbe4358a1a0687611af792e', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 17:21:39 compute-0 podman[256021]: 2026-01-26 17:21:39.197115771 +0000 UTC m=+0.086752016 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 26 17:21:39 compute-0 nova_compute[185389]: 2026-01-26 17:21:39.203 185393 DEBUG nova.network.neutron [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] [instance: cecfd5ba-76f1-47f6-8845-36e6c7ed9773] Successfully created port: 9121ca16-ef95-465a-8d54-65a4d9b6659a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 17:21:39 compute-0 nova_compute[185389]: 2026-01-26 17:21:39.230 185393 DEBUG oslo_concurrency.processutils [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/fce4846b88d54584e094a4cd72b3ae3b642ea493,backing_fmt=raw /var/lib/nova/instances/8c28c24a-cab4-43b3-b9ee-4ce40d092c71/disk 1073741824" returned: 0 in 0.397s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:21:39 compute-0 nova_compute[185389]: 2026-01-26 17:21:39.231 185393 DEBUG oslo_concurrency.lockutils [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] Lock "fce4846b88d54584e094a4cd72b3ae3b642ea493" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.475s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:21:39 compute-0 nova_compute[185389]: 2026-01-26 17:21:39.232 185393 DEBUG oslo_concurrency.processutils [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/fce4846b88d54584e094a4cd72b3ae3b642ea493 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:21:39 compute-0 nova_compute[185389]: 2026-01-26 17:21:39.296 185393 DEBUG oslo_concurrency.processutils [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/fce4846b88d54584e094a4cd72b3ae3b642ea493 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:21:39 compute-0 nova_compute[185389]: 2026-01-26 17:21:39.297 185393 DEBUG nova.virt.disk.api [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] Checking if we can resize image /var/lib/nova/instances/8c28c24a-cab4-43b3-b9ee-4ce40d092c71/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 26 17:21:39 compute-0 nova_compute[185389]: 2026-01-26 17:21:39.298 185393 DEBUG oslo_concurrency.processutils [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c28c24a-cab4-43b3-b9ee-4ce40d092c71/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:21:39 compute-0 nova_compute[185389]: 2026-01-26 17:21:39.361 185393 DEBUG oslo_concurrency.processutils [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c28c24a-cab4-43b3-b9ee-4ce40d092c71/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:21:39 compute-0 nova_compute[185389]: 2026-01-26 17:21:39.362 185393 DEBUG nova.virt.disk.api [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] Cannot resize image /var/lib/nova/instances/8c28c24a-cab4-43b3-b9ee-4ce40d092c71/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 26 17:21:39 compute-0 nova_compute[185389]: 2026-01-26 17:21:39.363 185393 DEBUG nova.objects.instance [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] Lazy-loading 'migration_context' on Instance uuid 8c28c24a-cab4-43b3-b9ee-4ce40d092c71 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 17:21:39 compute-0 nova_compute[185389]: 2026-01-26 17:21:39.378 185393 DEBUG nova.virt.libvirt.driver [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] [instance: 8c28c24a-cab4-43b3-b9ee-4ce40d092c71] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 17:21:39 compute-0 nova_compute[185389]: 2026-01-26 17:21:39.378 185393 DEBUG nova.virt.libvirt.driver [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] [instance: 8c28c24a-cab4-43b3-b9ee-4ce40d092c71] Ensure instance console log exists: /var/lib/nova/instances/8c28c24a-cab4-43b3-b9ee-4ce40d092c71/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 17:21:39 compute-0 nova_compute[185389]: 2026-01-26 17:21:39.379 185393 DEBUG oslo_concurrency.lockutils [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:21:39 compute-0 nova_compute[185389]: 2026-01-26 17:21:39.379 185393 DEBUG oslo_concurrency.lockutils [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:21:39 compute-0 nova_compute[185389]: 2026-01-26 17:21:39.380 185393 DEBUG oslo_concurrency.lockutils [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:21:39 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:39.896 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1c72c11d-5050-47c3-89e8-912766588fb3, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:21:40 compute-0 nova_compute[185389]: 2026-01-26 17:21:40.316 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:40 compute-0 nova_compute[185389]: 2026-01-26 17:21:40.321 185393 DEBUG nova.network.neutron [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] [instance: 8c28c24a-cab4-43b3-b9ee-4ce40d092c71] Successfully created port: 86c33312-6904-4dd4-9a95-7fd318980439 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 17:21:40 compute-0 nova_compute[185389]: 2026-01-26 17:21:40.651 185393 DEBUG nova.network.neutron [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Successfully updated port: 6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 17:21:40 compute-0 nova_compute[185389]: 2026-01-26 17:21:40.672 185393 DEBUG oslo_concurrency.lockutils [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Acquiring lock "refresh_cache-186e87cb-beb9-48df-8b10-dfc5c8afe996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 17:21:40 compute-0 nova_compute[185389]: 2026-01-26 17:21:40.673 185393 DEBUG oslo_concurrency.lockutils [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Acquired lock "refresh_cache-186e87cb-beb9-48df-8b10-dfc5c8afe996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 17:21:40 compute-0 nova_compute[185389]: 2026-01-26 17:21:40.673 185393 DEBUG nova.network.neutron [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 17:21:40 compute-0 nova_compute[185389]: 2026-01-26 17:21:40.829 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:41 compute-0 podman[256051]: 2026-01-26 17:21:41.185313313 +0000 UTC m=+0.080760195 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_managed=true)
Jan 26 17:21:41 compute-0 nova_compute[185389]: 2026-01-26 17:21:41.860 185393 DEBUG nova.network.neutron [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 17:21:43 compute-0 nova_compute[185389]: 2026-01-26 17:21:43.793 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:44 compute-0 nova_compute[185389]: 2026-01-26 17:21:44.010 185393 DEBUG nova.compute.manager [req-2c2d2b89-902d-4973-ad6c-cb9d20d855ff req-54112d85-6aca-4a9c-935a-fbf03401f980 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Received event network-changed-6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 17:21:44 compute-0 nova_compute[185389]: 2026-01-26 17:21:44.010 185393 DEBUG nova.compute.manager [req-2c2d2b89-902d-4973-ad6c-cb9d20d855ff req-54112d85-6aca-4a9c-935a-fbf03401f980 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Refreshing instance network info cache due to event network-changed-6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 17:21:44 compute-0 nova_compute[185389]: 2026-01-26 17:21:44.011 185393 DEBUG oslo_concurrency.lockutils [req-2c2d2b89-902d-4973-ad6c-cb9d20d855ff req-54112d85-6aca-4a9c-935a-fbf03401f980 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquiring lock "refresh_cache-186e87cb-beb9-48df-8b10-dfc5c8afe996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 17:21:44 compute-0 podman[256070]: 2026-01-26 17:21:44.217984165 +0000 UTC m=+0.100485478 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi)
Jan 26 17:21:44 compute-0 podman[256071]: 2026-01-26 17:21:44.233530878 +0000 UTC m=+0.111089487 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, config_id=kepler, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, managed_by=edpm_ansible, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, io.k8s.display-name=Red Hat Universal Base 
Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, version=9.4, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc.)
Jan 26 17:21:44 compute-0 podman[256069]: 2026-01-26 17:21:44.239553691 +0000 UTC m=+0.125513098 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack 
Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 26 17:21:45 compute-0 nova_compute[185389]: 2026-01-26 17:21:45.318 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:45 compute-0 nova_compute[185389]: 2026-01-26 17:21:45.655 185393 DEBUG nova.network.neutron [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] [instance: 8c28c24a-cab4-43b3-b9ee-4ce40d092c71] Successfully updated port: 86c33312-6904-4dd4-9a95-7fd318980439 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 17:21:45 compute-0 nova_compute[185389]: 2026-01-26 17:21:45.833 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:45 compute-0 nova_compute[185389]: 2026-01-26 17:21:45.920 185393 DEBUG oslo_concurrency.lockutils [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] Acquiring lock "refresh_cache-8c28c24a-cab4-43b3-b9ee-4ce40d092c71" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 17:21:45 compute-0 nova_compute[185389]: 2026-01-26 17:21:45.920 185393 DEBUG oslo_concurrency.lockutils [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] Acquired lock "refresh_cache-8c28c24a-cab4-43b3-b9ee-4ce40d092c71" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 17:21:45 compute-0 nova_compute[185389]: 2026-01-26 17:21:45.921 185393 DEBUG nova.network.neutron [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] [instance: 8c28c24a-cab4-43b3-b9ee-4ce40d092c71] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 17:21:46 compute-0 nova_compute[185389]: 2026-01-26 17:21:46.903 185393 DEBUG nova.network.neutron [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] [instance: 8c28c24a-cab4-43b3-b9ee-4ce40d092c71] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 17:21:47 compute-0 nova_compute[185389]: 2026-01-26 17:21:47.001 185393 DEBUG nova.network.neutron [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Updating instance_info_cache with network_info: [{"id": "6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3", "address": "fa:16:3e:b3:ea:64", "network": {"id": "4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1598418847-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b9ff6ad3012499db2eb0a82a1ccbcaa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e11a3e1-dc", "ovs_interfaceid": "6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 17:21:47 compute-0 nova_compute[185389]: 2026-01-26 17:21:47.025 185393 DEBUG nova.network.neutron [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] [instance: cecfd5ba-76f1-47f6-8845-36e6c7ed9773] Successfully updated port: 9121ca16-ef95-465a-8d54-65a4d9b6659a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 17:21:47 compute-0 nova_compute[185389]: 2026-01-26 17:21:47.043 185393 DEBUG oslo_concurrency.lockutils [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Releasing lock "refresh_cache-186e87cb-beb9-48df-8b10-dfc5c8afe996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 17:21:47 compute-0 nova_compute[185389]: 2026-01-26 17:21:47.044 185393 DEBUG nova.compute.manager [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Instance network_info: |[{"id": "6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3", "address": "fa:16:3e:b3:ea:64", "network": {"id": "4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1598418847-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b9ff6ad3012499db2eb0a82a1ccbcaa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e11a3e1-dc", "ovs_interfaceid": "6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 17:21:47 compute-0 nova_compute[185389]: 2026-01-26 17:21:47.044 185393 DEBUG oslo_concurrency.lockutils [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] Acquiring lock "refresh_cache-cecfd5ba-76f1-47f6-8845-36e6c7ed9773" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 17:21:47 compute-0 nova_compute[185389]: 2026-01-26 17:21:47.045 185393 DEBUG oslo_concurrency.lockutils [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] Acquired lock "refresh_cache-cecfd5ba-76f1-47f6-8845-36e6c7ed9773" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 17:21:47 compute-0 nova_compute[185389]: 2026-01-26 17:21:47.045 185393 DEBUG nova.network.neutron [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] [instance: cecfd5ba-76f1-47f6-8845-36e6c7ed9773] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 17:21:47 compute-0 nova_compute[185389]: 2026-01-26 17:21:47.046 185393 DEBUG oslo_concurrency.lockutils [req-2c2d2b89-902d-4973-ad6c-cb9d20d855ff req-54112d85-6aca-4a9c-935a-fbf03401f980 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquired lock "refresh_cache-186e87cb-beb9-48df-8b10-dfc5c8afe996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 17:21:47 compute-0 nova_compute[185389]: 2026-01-26 17:21:47.046 185393 DEBUG nova.network.neutron [req-2c2d2b89-902d-4973-ad6c-cb9d20d855ff req-54112d85-6aca-4a9c-935a-fbf03401f980 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Refreshing network info cache for port 6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 17:21:47 compute-0 nova_compute[185389]: 2026-01-26 17:21:47.049 185393 DEBUG nova.virt.libvirt.driver [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Start _get_guest_xml network_info=[{"id": "6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3", "address": "fa:16:3e:b3:ea:64", "network": {"id": "4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1598418847-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b9ff6ad3012499db2eb0a82a1ccbcaa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e11a3e1-dc", "ovs_interfaceid": "6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T17:20:37Z,direct_url=<?>,disk_format='qcow2',id=90acf026-cf3a-409a-999e-35d89bb9a6bf,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='aa8f1f3bbce34237a208c8e92ca9286f',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T17:20:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'boot_index': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '90acf026-cf3a-409a-999e-35d89bb9a6bf'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 17:21:47 compute-0 nova_compute[185389]: 2026-01-26 17:21:47.057 185393 WARNING nova.virt.libvirt.driver [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:21:47 compute-0 nova_compute[185389]: 2026-01-26 17:21:47.065 185393 DEBUG nova.virt.libvirt.host [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 17:21:47 compute-0 nova_compute[185389]: 2026-01-26 17:21:47.066 185393 DEBUG nova.virt.libvirt.host [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 17:21:47 compute-0 nova_compute[185389]: 2026-01-26 17:21:47.076 185393 DEBUG nova.virt.libvirt.host [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 17:21:47 compute-0 nova_compute[185389]: 2026-01-26 17:21:47.077 185393 DEBUG nova.virt.libvirt.host [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 17:21:47 compute-0 nova_compute[185389]: 2026-01-26 17:21:47.078 185393 DEBUG nova.virt.libvirt.driver [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 17:21:47 compute-0 nova_compute[185389]: 2026-01-26 17:21:47.078 185393 DEBUG nova.virt.hardware [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T17:20:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8d013773-e8ea-4b83-a8e3-f58d9749637f',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T17:20:37Z,direct_url=<?>,disk_format='qcow2',id=90acf026-cf3a-409a-999e-35d89bb9a6bf,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='aa8f1f3bbce34237a208c8e92ca9286f',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T17:20:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 17:21:47 compute-0 nova_compute[185389]: 2026-01-26 17:21:47.079 185393 DEBUG nova.virt.hardware [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 17:21:47 compute-0 nova_compute[185389]: 2026-01-26 17:21:47.079 185393 DEBUG nova.virt.hardware [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 17:21:47 compute-0 nova_compute[185389]: 2026-01-26 17:21:47.080 185393 DEBUG nova.virt.hardware [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 17:21:47 compute-0 nova_compute[185389]: 2026-01-26 17:21:47.080 185393 DEBUG nova.virt.hardware [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 17:21:47 compute-0 nova_compute[185389]: 2026-01-26 17:21:47.080 185393 DEBUG nova.virt.hardware [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 17:21:47 compute-0 nova_compute[185389]: 2026-01-26 17:21:47.081 185393 DEBUG nova.virt.hardware [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 17:21:47 compute-0 nova_compute[185389]: 2026-01-26 17:21:47.081 185393 DEBUG nova.virt.hardware [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 17:21:47 compute-0 nova_compute[185389]: 2026-01-26 17:21:47.081 185393 DEBUG nova.virt.hardware [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 17:21:47 compute-0 nova_compute[185389]: 2026-01-26 17:21:47.082 185393 DEBUG nova.virt.hardware [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 17:21:47 compute-0 nova_compute[185389]: 2026-01-26 17:21:47.082 185393 DEBUG nova.virt.hardware [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 17:21:47 compute-0 nova_compute[185389]: 2026-01-26 17:21:47.086 185393 DEBUG nova.virt.libvirt.vif [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T17:21:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-34810632',display_name='tempest-ServerActionsTestJSON-server-34810632',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-34810632',id=7,image_ref='90acf026-cf3a-409a-999e-35d89bb9a6bf',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHTXJeN/GiNVdk5tCK494xdfwd2oGU0rMaOXTgIR00PDsryTQP8qZXOiVkgunB3Q/QnB+t1PHKegnTlGoORFTNpKcXfSp02clner5iC0LHdkku2AHdsO52WWVjg3zvN4Sw==',key_name='tempest-keypair-288037080',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9b9ff6ad3012499db2eb0a82a1ccbcaa',ramdisk_id='',reservation_id='r-evpcozau',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='90acf026-cf3a-409a-999e-35d89bb9a6bf',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-254851137',owner_user_name='tempest-ServerActionsTestJSON-254851137-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T17:21:34Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='6acd3be55c754b3dbf8ef6c0922b18ae',uuid=186e87cb-beb9-48df-8b10-dfc5c8afe996,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3", "address": "fa:16:3e:b3:ea:64", "network": {"id": "4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1598418847-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b9ff6ad3012499db2eb0a82a1ccbcaa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e11a3e1-dc", "ovs_interfaceid": "6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 17:21:47 compute-0 nova_compute[185389]: 2026-01-26 17:21:47.087 185393 DEBUG nova.network.os_vif_util [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Converting VIF {"id": "6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3", "address": "fa:16:3e:b3:ea:64", "network": {"id": "4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1598418847-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b9ff6ad3012499db2eb0a82a1ccbcaa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e11a3e1-dc", "ovs_interfaceid": "6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 17:21:47 compute-0 nova_compute[185389]: 2026-01-26 17:21:47.088 185393 DEBUG nova.network.os_vif_util [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:ea:64,bridge_name='br-int',has_traffic_filtering=True,id=6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3,network=Network(4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e11a3e1-dc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 17:21:47 compute-0 nova_compute[185389]: 2026-01-26 17:21:47.089 185393 DEBUG nova.objects.instance [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Lazy-loading 'pci_devices' on Instance uuid 186e87cb-beb9-48df-8b10-dfc5c8afe996 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 17:21:47 compute-0 nova_compute[185389]: 2026-01-26 17:21:47.102 185393 DEBUG nova.virt.libvirt.driver [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] End _get_guest_xml xml=<domain type="kvm">
Jan 26 17:21:47 compute-0 nova_compute[185389]:   <uuid>186e87cb-beb9-48df-8b10-dfc5c8afe996</uuid>
Jan 26 17:21:47 compute-0 nova_compute[185389]:   <name>instance-00000007</name>
Jan 26 17:21:47 compute-0 nova_compute[185389]:   <memory>131072</memory>
Jan 26 17:21:47 compute-0 nova_compute[185389]:   <vcpu>1</vcpu>
Jan 26 17:21:47 compute-0 nova_compute[185389]:   <metadata>
Jan 26 17:21:47 compute-0 nova_compute[185389]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 17:21:47 compute-0 nova_compute[185389]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 17:21:47 compute-0 nova_compute[185389]:       <nova:name>tempest-ServerActionsTestJSON-server-34810632</nova:name>
Jan 26 17:21:47 compute-0 nova_compute[185389]:       <nova:creationTime>2026-01-26 17:21:47</nova:creationTime>
Jan 26 17:21:47 compute-0 nova_compute[185389]:       <nova:flavor name="m1.nano">
Jan 26 17:21:47 compute-0 nova_compute[185389]:         <nova:memory>128</nova:memory>
Jan 26 17:21:47 compute-0 nova_compute[185389]:         <nova:disk>1</nova:disk>
Jan 26 17:21:47 compute-0 nova_compute[185389]:         <nova:swap>0</nova:swap>
Jan 26 17:21:47 compute-0 nova_compute[185389]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 17:21:47 compute-0 nova_compute[185389]:         <nova:vcpus>1</nova:vcpus>
Jan 26 17:21:47 compute-0 nova_compute[185389]:       </nova:flavor>
Jan 26 17:21:47 compute-0 nova_compute[185389]:       <nova:owner>
Jan 26 17:21:47 compute-0 nova_compute[185389]:         <nova:user uuid="6acd3be55c754b3dbf8ef6c0922b18ae">tempest-ServerActionsTestJSON-254851137-project-member</nova:user>
Jan 26 17:21:47 compute-0 nova_compute[185389]:         <nova:project uuid="9b9ff6ad3012499db2eb0a82a1ccbcaa">tempest-ServerActionsTestJSON-254851137</nova:project>
Jan 26 17:21:47 compute-0 nova_compute[185389]:       </nova:owner>
Jan 26 17:21:47 compute-0 nova_compute[185389]:       <nova:root type="image" uuid="90acf026-cf3a-409a-999e-35d89bb9a6bf"/>
Jan 26 17:21:47 compute-0 nova_compute[185389]:       <nova:ports>
Jan 26 17:21:47 compute-0 nova_compute[185389]:         <nova:port uuid="6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3">
Jan 26 17:21:47 compute-0 nova_compute[185389]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 26 17:21:47 compute-0 nova_compute[185389]:         </nova:port>
Jan 26 17:21:47 compute-0 nova_compute[185389]:       </nova:ports>
Jan 26 17:21:47 compute-0 nova_compute[185389]:     </nova:instance>
Jan 26 17:21:47 compute-0 nova_compute[185389]:   </metadata>
Jan 26 17:21:47 compute-0 nova_compute[185389]:   <sysinfo type="smbios">
Jan 26 17:21:47 compute-0 nova_compute[185389]:     <system>
Jan 26 17:21:47 compute-0 nova_compute[185389]:       <entry name="manufacturer">RDO</entry>
Jan 26 17:21:47 compute-0 nova_compute[185389]:       <entry name="product">OpenStack Compute</entry>
Jan 26 17:21:47 compute-0 nova_compute[185389]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 17:21:47 compute-0 nova_compute[185389]:       <entry name="serial">186e87cb-beb9-48df-8b10-dfc5c8afe996</entry>
Jan 26 17:21:47 compute-0 nova_compute[185389]:       <entry name="uuid">186e87cb-beb9-48df-8b10-dfc5c8afe996</entry>
Jan 26 17:21:47 compute-0 nova_compute[185389]:       <entry name="family">Virtual Machine</entry>
Jan 26 17:21:47 compute-0 nova_compute[185389]:     </system>
Jan 26 17:21:47 compute-0 nova_compute[185389]:   </sysinfo>
Jan 26 17:21:47 compute-0 nova_compute[185389]:   <os>
Jan 26 17:21:47 compute-0 nova_compute[185389]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 17:21:47 compute-0 nova_compute[185389]:     <boot dev="hd"/>
Jan 26 17:21:47 compute-0 nova_compute[185389]:     <smbios mode="sysinfo"/>
Jan 26 17:21:47 compute-0 nova_compute[185389]:   </os>
Jan 26 17:21:47 compute-0 nova_compute[185389]:   <features>
Jan 26 17:21:47 compute-0 nova_compute[185389]:     <acpi/>
Jan 26 17:21:47 compute-0 nova_compute[185389]:     <apic/>
Jan 26 17:21:47 compute-0 nova_compute[185389]:     <vmcoreinfo/>
Jan 26 17:21:47 compute-0 nova_compute[185389]:   </features>
Jan 26 17:21:47 compute-0 nova_compute[185389]:   <clock offset="utc">
Jan 26 17:21:47 compute-0 nova_compute[185389]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 17:21:47 compute-0 nova_compute[185389]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 17:21:47 compute-0 nova_compute[185389]:     <timer name="hpet" present="no"/>
Jan 26 17:21:47 compute-0 nova_compute[185389]:   </clock>
Jan 26 17:21:47 compute-0 nova_compute[185389]:   <cpu mode="host-model" match="exact">
Jan 26 17:21:47 compute-0 nova_compute[185389]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 17:21:47 compute-0 nova_compute[185389]:   </cpu>
Jan 26 17:21:47 compute-0 nova_compute[185389]:   <devices>
Jan 26 17:21:47 compute-0 nova_compute[185389]:     <disk type="file" device="disk">
Jan 26 17:21:47 compute-0 nova_compute[185389]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 26 17:21:47 compute-0 nova_compute[185389]:       <source file="/var/lib/nova/instances/186e87cb-beb9-48df-8b10-dfc5c8afe996/disk"/>
Jan 26 17:21:47 compute-0 nova_compute[185389]:       <target dev="vda" bus="virtio"/>
Jan 26 17:21:47 compute-0 nova_compute[185389]:     </disk>
Jan 26 17:21:47 compute-0 nova_compute[185389]:     <disk type="file" device="cdrom">
Jan 26 17:21:47 compute-0 nova_compute[185389]:       <driver name="qemu" type="raw" cache="none"/>
Jan 26 17:21:47 compute-0 nova_compute[185389]:       <source file="/var/lib/nova/instances/186e87cb-beb9-48df-8b10-dfc5c8afe996/disk.config"/>
Jan 26 17:21:47 compute-0 nova_compute[185389]:       <target dev="sda" bus="sata"/>
Jan 26 17:21:47 compute-0 nova_compute[185389]:     </disk>
Jan 26 17:21:47 compute-0 nova_compute[185389]:     <interface type="ethernet">
Jan 26 17:21:47 compute-0 nova_compute[185389]:       <mac address="fa:16:3e:b3:ea:64"/>
Jan 26 17:21:47 compute-0 nova_compute[185389]:       <model type="virtio"/>
Jan 26 17:21:47 compute-0 nova_compute[185389]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 17:21:47 compute-0 nova_compute[185389]:       <mtu size="1442"/>
Jan 26 17:21:47 compute-0 nova_compute[185389]:       <target dev="tap6e11a3e1-dc"/>
Jan 26 17:21:47 compute-0 nova_compute[185389]:     </interface>
Jan 26 17:21:47 compute-0 nova_compute[185389]:     <serial type="pty">
Jan 26 17:21:47 compute-0 nova_compute[185389]:       <log file="/var/lib/nova/instances/186e87cb-beb9-48df-8b10-dfc5c8afe996/console.log" append="off"/>
Jan 26 17:21:47 compute-0 nova_compute[185389]:     </serial>
Jan 26 17:21:47 compute-0 nova_compute[185389]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 17:21:47 compute-0 nova_compute[185389]:     <video>
Jan 26 17:21:47 compute-0 nova_compute[185389]:       <model type="virtio"/>
Jan 26 17:21:47 compute-0 nova_compute[185389]:     </video>
Jan 26 17:21:47 compute-0 nova_compute[185389]:     <input type="tablet" bus="usb"/>
Jan 26 17:21:47 compute-0 nova_compute[185389]:     <rng model="virtio">
Jan 26 17:21:47 compute-0 nova_compute[185389]:       <backend model="random">/dev/urandom</backend>
Jan 26 17:21:47 compute-0 nova_compute[185389]:     </rng>
Jan 26 17:21:47 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root"/>
Jan 26 17:21:47 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:21:47 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:21:47 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:21:47 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:21:47 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:21:47 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:21:47 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:21:47 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:21:47 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:21:47 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:21:47 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:21:47 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:21:47 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:21:47 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:21:47 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:21:47 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:21:47 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:21:47 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:21:47 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:21:47 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:21:47 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:21:47 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:21:47 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:21:47 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:21:47 compute-0 nova_compute[185389]:     <controller type="usb" index="0"/>
Jan 26 17:21:47 compute-0 nova_compute[185389]:     <memballoon model="virtio">
Jan 26 17:21:47 compute-0 nova_compute[185389]:       <stats period="10"/>
Jan 26 17:21:47 compute-0 nova_compute[185389]:     </memballoon>
Jan 26 17:21:47 compute-0 nova_compute[185389]:   </devices>
Jan 26 17:21:47 compute-0 nova_compute[185389]: </domain>
Jan 26 17:21:47 compute-0 nova_compute[185389]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 17:21:47 compute-0 nova_compute[185389]: 2026-01-26 17:21:47.104 185393 DEBUG nova.compute.manager [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Preparing to wait for external event network-vif-plugged-6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 17:21:47 compute-0 nova_compute[185389]: 2026-01-26 17:21:47.105 185393 DEBUG oslo_concurrency.lockutils [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Acquiring lock "186e87cb-beb9-48df-8b10-dfc5c8afe996-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:21:47 compute-0 nova_compute[185389]: 2026-01-26 17:21:47.105 185393 DEBUG oslo_concurrency.lockutils [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Lock "186e87cb-beb9-48df-8b10-dfc5c8afe996-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:21:47 compute-0 nova_compute[185389]: 2026-01-26 17:21:47.106 185393 DEBUG oslo_concurrency.lockutils [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Lock "186e87cb-beb9-48df-8b10-dfc5c8afe996-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:21:47 compute-0 nova_compute[185389]: 2026-01-26 17:21:47.106 185393 DEBUG nova.virt.libvirt.vif [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T17:21:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-34810632',display_name='tempest-ServerActionsTestJSON-server-34810632',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-34810632',id=7,image_ref='90acf026-cf3a-409a-999e-35d89bb9a6bf',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHTXJeN/GiNVdk5tCK494xdfwd2oGU0rMaOXTgIR00PDsryTQP8qZXOiVkgunB3Q/QnB+t1PHKegnTlGoORFTNpKcXfSp02clner5iC0LHdkku2AHdsO52WWVjg3zvN4Sw==',key_name='tempest-keypair-288037080',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9b9ff6ad3012499db2eb0a82a1ccbcaa',ramdisk_id='',reservation_id='r-evpcozau',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='90acf026-cf3a-409a-999e-35d89bb9a6bf',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-254851137',owner_user_name='tempest-ServerActionsTestJSON-254851137-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T17:21:34Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='6acd3be55c754b3dbf8ef6c0922b18ae',uuid=186e87cb-beb9-48df-8b10-dfc5c8afe996,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3", "address": "fa:16:3e:b3:ea:64", "network": {"id": "4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1598418847-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b9ff6ad3012499db2eb0a82a1ccbcaa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e11a3e1-dc", "ovs_interfaceid": "6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 17:21:47 compute-0 nova_compute[185389]: 2026-01-26 17:21:47.107 185393 DEBUG nova.network.os_vif_util [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Converting VIF {"id": "6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3", "address": "fa:16:3e:b3:ea:64", "network": {"id": "4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1598418847-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b9ff6ad3012499db2eb0a82a1ccbcaa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e11a3e1-dc", "ovs_interfaceid": "6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 17:21:47 compute-0 nova_compute[185389]: 2026-01-26 17:21:47.107 185393 DEBUG nova.network.os_vif_util [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:ea:64,bridge_name='br-int',has_traffic_filtering=True,id=6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3,network=Network(4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e11a3e1-dc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 17:21:47 compute-0 nova_compute[185389]: 2026-01-26 17:21:47.108 185393 DEBUG os_vif [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:ea:64,bridge_name='br-int',has_traffic_filtering=True,id=6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3,network=Network(4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e11a3e1-dc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 17:21:47 compute-0 nova_compute[185389]: 2026-01-26 17:21:47.109 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:47 compute-0 nova_compute[185389]: 2026-01-26 17:21:47.109 185393 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:21:47 compute-0 nova_compute[185389]: 2026-01-26 17:21:47.109 185393 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 17:21:47 compute-0 nova_compute[185389]: 2026-01-26 17:21:47.113 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:47 compute-0 nova_compute[185389]: 2026-01-26 17:21:47.114 185393 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6e11a3e1-dc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:21:47 compute-0 nova_compute[185389]: 2026-01-26 17:21:47.114 185393 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6e11a3e1-dc, col_values=(('external_ids', {'iface-id': '6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b3:ea:64', 'vm-uuid': '186e87cb-beb9-48df-8b10-dfc5c8afe996'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:21:47 compute-0 NetworkManager[56253]: <info>  [1769448107.1179] manager: (tap6e11a3e1-dc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/35)
Jan 26 17:21:47 compute-0 nova_compute[185389]: 2026-01-26 17:21:47.121 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 17:21:47 compute-0 nova_compute[185389]: 2026-01-26 17:21:47.129 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:47 compute-0 nova_compute[185389]: 2026-01-26 17:21:47.131 185393 INFO os_vif [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:ea:64,bridge_name='br-int',has_traffic_filtering=True,id=6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3,network=Network(4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e11a3e1-dc')
Jan 26 17:21:47 compute-0 nova_compute[185389]: 2026-01-26 17:21:47.294 185393 DEBUG nova.virt.libvirt.driver [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 17:21:47 compute-0 nova_compute[185389]: 2026-01-26 17:21:47.294 185393 DEBUG nova.virt.libvirt.driver [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 17:21:47 compute-0 nova_compute[185389]: 2026-01-26 17:21:47.294 185393 DEBUG nova.virt.libvirt.driver [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] No VIF found with MAC fa:16:3e:b3:ea:64, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 17:21:47 compute-0 nova_compute[185389]: 2026-01-26 17:21:47.295 185393 INFO nova.virt.libvirt.driver [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Using config drive
Jan 26 17:21:48 compute-0 nova_compute[185389]: 2026-01-26 17:21:48.089 185393 DEBUG nova.compute.manager [req-b42ccb82-f806-45f0-baf0-8570ea3d51e1 req-fa2f9b72-522f-4d9a-8515-0798cdfe50e7 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 8c28c24a-cab4-43b3-b9ee-4ce40d092c71] Received event network-changed-86c33312-6904-4dd4-9a95-7fd318980439 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 17:21:48 compute-0 nova_compute[185389]: 2026-01-26 17:21:48.090 185393 DEBUG nova.compute.manager [req-b42ccb82-f806-45f0-baf0-8570ea3d51e1 req-fa2f9b72-522f-4d9a-8515-0798cdfe50e7 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 8c28c24a-cab4-43b3-b9ee-4ce40d092c71] Refreshing instance network info cache due to event network-changed-86c33312-6904-4dd4-9a95-7fd318980439. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 17:21:48 compute-0 nova_compute[185389]: 2026-01-26 17:21:48.090 185393 DEBUG oslo_concurrency.lockutils [req-b42ccb82-f806-45f0-baf0-8570ea3d51e1 req-fa2f9b72-522f-4d9a-8515-0798cdfe50e7 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquiring lock "refresh_cache-8c28c24a-cab4-43b3-b9ee-4ce40d092c71" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 17:21:48 compute-0 nova_compute[185389]: 2026-01-26 17:21:48.257 185393 DEBUG nova.network.neutron [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] [instance: cecfd5ba-76f1-47f6-8845-36e6c7ed9773] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 17:21:48 compute-0 nova_compute[185389]: 2026-01-26 17:21:48.358 185393 INFO nova.virt.libvirt.driver [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Creating config drive at /var/lib/nova/instances/186e87cb-beb9-48df-8b10-dfc5c8afe996/disk.config
Jan 26 17:21:48 compute-0 nova_compute[185389]: 2026-01-26 17:21:48.366 185393 DEBUG oslo_concurrency.processutils [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/186e87cb-beb9-48df-8b10-dfc5c8afe996/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbew0v87f execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:21:48 compute-0 nova_compute[185389]: 2026-01-26 17:21:48.498 185393 DEBUG oslo_concurrency.processutils [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/186e87cb-beb9-48df-8b10-dfc5c8afe996/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbew0v87f" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:21:48 compute-0 kernel: tap6e11a3e1-dc: entered promiscuous mode
Jan 26 17:21:48 compute-0 NetworkManager[56253]: <info>  [1769448108.5724] manager: (tap6e11a3e1-dc): new Tun device (/org/freedesktop/NetworkManager/Devices/36)
Jan 26 17:21:48 compute-0 ovn_controller[97699]: 2026-01-26T17:21:48Z|00065|binding|INFO|Claiming lport 6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3 for this chassis.
Jan 26 17:21:48 compute-0 ovn_controller[97699]: 2026-01-26T17:21:48Z|00066|binding|INFO|6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3: Claiming fa:16:3e:b3:ea:64 10.100.0.5
Jan 26 17:21:48 compute-0 nova_compute[185389]: 2026-01-26 17:21:48.575 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:48 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:48.589 106955 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b3:ea:64 10.100.0.5'], port_security=['fa:16:3e:b3:ea:64 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '186e87cb-beb9-48df-8b10-dfc5c8afe996', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9b9ff6ad3012499db2eb0a82a1ccbcaa', 'neutron:revision_number': '2', 'neutron:security_group_ids': '34094d50-e876-4bbe-985c-d748419fede6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a0b14c64-3c3f-4e5b-a736-e555c8460dfa, chassis=[<ovs.db.idl.Row object at 0x7faee18cfdf0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7faee18cfdf0>], logical_port=6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 17:21:48 compute-0 nova_compute[185389]: 2026-01-26 17:21:48.591 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:48 compute-0 ovn_controller[97699]: 2026-01-26T17:21:48Z|00067|binding|INFO|Setting lport 6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3 ovn-installed in OVS
Jan 26 17:21:48 compute-0 ovn_controller[97699]: 2026-01-26T17:21:48Z|00068|binding|INFO|Setting lport 6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3 up in Southbound
Jan 26 17:21:48 compute-0 nova_compute[185389]: 2026-01-26 17:21:48.597 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:48 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:48.592 106955 INFO neutron.agent.ovn.metadata.agent [-] Port 6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3 in datapath 4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac bound to our chassis
Jan 26 17:21:48 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:48.595 106955 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac
Jan 26 17:21:48 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:48.610 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[06bd2cc2-c4c0-43eb-ad07-70968c436e71]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:21:48 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:48.611 106955 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4a7c91d4-b1 in ovnmeta-4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 17:21:48 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:48.614 238734 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4a7c91d4-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 17:21:48 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:48.614 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[3ef9826e-12a1-4d1f-886d-2779e7ff69da]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:21:48 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:48.615 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[8a7a7b84-7950-4b3a-98fa-64a65edeb4ba]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:21:48 compute-0 systemd-machined[156679]: New machine qemu-7-instance-00000007.
Jan 26 17:21:48 compute-0 systemd[1]: Started Virtual Machine qemu-7-instance-00000007.
Jan 26 17:21:48 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:48.630 107449 DEBUG oslo.privsep.daemon [-] privsep: reply[2d8b1611-123d-4f9c-94d0-66666c91e70e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:21:48 compute-0 systemd-udevd[256151]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 17:21:48 compute-0 NetworkManager[56253]: <info>  [1769448108.6480] device (tap6e11a3e1-dc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 17:21:48 compute-0 NetworkManager[56253]: <info>  [1769448108.6485] device (tap6e11a3e1-dc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 17:21:48 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:48.657 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[39abd32c-3499-41a2-91b9-afa9de290539]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:21:48 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:48.694 238787 DEBUG oslo.privsep.daemon [-] privsep: reply[26094c37-033b-4151-82fa-09f5b649ac22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:21:48 compute-0 systemd-udevd[256154]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 17:21:48 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:48.701 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[e579c576-7222-42df-b831-61560ef434e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:21:48 compute-0 NetworkManager[56253]: <info>  [1769448108.7048] manager: (tap4a7c91d4-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/37)
Jan 26 17:21:48 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:48.737 238787 DEBUG oslo.privsep.daemon [-] privsep: reply[26e8339b-373c-4691-96d0-76c874928cc4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:21:48 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:48.742 238787 DEBUG oslo.privsep.daemon [-] privsep: reply[715d251e-092c-43ba-9b80-1bbe3524fdfe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:21:48 compute-0 NetworkManager[56253]: <info>  [1769448108.7681] device (tap4a7c91d4-b0): carrier: link connected
Jan 26 17:21:48 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:48.772 238787 DEBUG oslo.privsep.daemon [-] privsep: reply[f247bad4-5f5e-4afb-9777-75254006ef91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:21:48 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:48.790 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[ef69ddce-4700-4901-b94a-decfa9721d68]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4a7c91d4-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:67:1e:1e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675726, 'reachable_time': 43266, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 256182, 'error': None, 'target': 'ovnmeta-4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:21:48 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:48.809 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[b0bb21cb-7226-4805-875d-6c0adfd83641]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe67:1e1e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 675726, 'tstamp': 675726}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 256184, 'error': None, 'target': 'ovnmeta-4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:21:48 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:48.828 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[1a22078d-5e73-4802-8463-c2ed129c0212]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4a7c91d4-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:67:1e:1e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675726, 'reachable_time': 43266, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 256185, 'error': None, 'target': 'ovnmeta-4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:21:48 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:48.865 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[8923f95c-4a3b-42a7-bb58-c5e15faf3c85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:21:48 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:48.937 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[0e6c330d-a8e2-4276-9266-114f3da7833d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:21:48 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:48.940 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4a7c91d4-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:21:48 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:48.941 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 17:21:48 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:48.942 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4a7c91d4-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:21:48 compute-0 nova_compute[185389]: 2026-01-26 17:21:48.945 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:48 compute-0 NetworkManager[56253]: <info>  [1769448108.9464] manager: (tap4a7c91d4-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/38)
Jan 26 17:21:48 compute-0 kernel: tap4a7c91d4-b0: entered promiscuous mode
Jan 26 17:21:48 compute-0 nova_compute[185389]: 2026-01-26 17:21:48.950 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:48 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:48.950 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4a7c91d4-b0, col_values=(('external_ids', {'iface-id': 'd58b7d53-5cc1-4ed8-aa06-162121fd1800'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:21:48 compute-0 nova_compute[185389]: 2026-01-26 17:21:48.952 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:48 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:48.954 106955 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 17:21:48 compute-0 ovn_controller[97699]: 2026-01-26T17:21:48Z|00069|binding|INFO|Releasing lport d58b7d53-5cc1-4ed8-aa06-162121fd1800 from this chassis (sb_readonly=0)
Jan 26 17:21:48 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:48.955 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[621dba1a-0e64-4c74-8d10-7bbb3b8e7144]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:21:48 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:48.956 106955 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 17:21:48 compute-0 ovn_metadata_agent[106950]: global
Jan 26 17:21:48 compute-0 ovn_metadata_agent[106950]:     log         /dev/log local0 debug
Jan 26 17:21:48 compute-0 ovn_metadata_agent[106950]:     log-tag     haproxy-metadata-proxy-4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac
Jan 26 17:21:48 compute-0 ovn_metadata_agent[106950]:     user        root
Jan 26 17:21:48 compute-0 ovn_metadata_agent[106950]:     group       root
Jan 26 17:21:48 compute-0 ovn_metadata_agent[106950]:     maxconn     1024
Jan 26 17:21:48 compute-0 ovn_metadata_agent[106950]:     pidfile     /var/lib/neutron/external/pids/4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac.pid.haproxy
Jan 26 17:21:48 compute-0 ovn_metadata_agent[106950]:     daemon
Jan 26 17:21:48 compute-0 ovn_metadata_agent[106950]: 
Jan 26 17:21:48 compute-0 ovn_metadata_agent[106950]: defaults
Jan 26 17:21:48 compute-0 ovn_metadata_agent[106950]:     log global
Jan 26 17:21:48 compute-0 ovn_metadata_agent[106950]:     mode http
Jan 26 17:21:48 compute-0 ovn_metadata_agent[106950]:     option httplog
Jan 26 17:21:48 compute-0 ovn_metadata_agent[106950]:     option dontlognull
Jan 26 17:21:48 compute-0 ovn_metadata_agent[106950]:     option http-server-close
Jan 26 17:21:48 compute-0 ovn_metadata_agent[106950]:     option forwardfor
Jan 26 17:21:48 compute-0 ovn_metadata_agent[106950]:     retries                 3
Jan 26 17:21:48 compute-0 ovn_metadata_agent[106950]:     timeout http-request    30s
Jan 26 17:21:48 compute-0 ovn_metadata_agent[106950]:     timeout connect         30s
Jan 26 17:21:48 compute-0 ovn_metadata_agent[106950]:     timeout client          32s
Jan 26 17:21:48 compute-0 ovn_metadata_agent[106950]:     timeout server          32s
Jan 26 17:21:48 compute-0 ovn_metadata_agent[106950]:     timeout http-keep-alive 30s
Jan 26 17:21:48 compute-0 ovn_metadata_agent[106950]: 
Jan 26 17:21:48 compute-0 ovn_metadata_agent[106950]: 
Jan 26 17:21:48 compute-0 ovn_metadata_agent[106950]: listen listener
Jan 26 17:21:48 compute-0 ovn_metadata_agent[106950]:     bind 169.254.169.254:80
Jan 26 17:21:48 compute-0 ovn_metadata_agent[106950]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 17:21:48 compute-0 ovn_metadata_agent[106950]:     http-request add-header X-OVN-Network-ID 4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac
Jan 26 17:21:48 compute-0 ovn_metadata_agent[106950]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 17:21:48 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:48.958 106955 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac', 'env', 'PROCESS_TAG=haproxy-4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 17:21:48 compute-0 nova_compute[185389]: 2026-01-26 17:21:48.972 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:49 compute-0 nova_compute[185389]: 2026-01-26 17:21:49.211 185393 DEBUG nova.virt.driver [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Emitting event <LifecycleEvent: 1769448109.2099328, 186e87cb-beb9-48df-8b10-dfc5c8afe996 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 17:21:49 compute-0 nova_compute[185389]: 2026-01-26 17:21:49.212 185393 INFO nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] VM Started (Lifecycle Event)
Jan 26 17:21:49 compute-0 nova_compute[185389]: 2026-01-26 17:21:49.237 185393 DEBUG nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 17:21:49 compute-0 nova_compute[185389]: 2026-01-26 17:21:49.245 185393 DEBUG nova.virt.driver [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Emitting event <LifecycleEvent: 1769448109.2101717, 186e87cb-beb9-48df-8b10-dfc5c8afe996 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 17:21:49 compute-0 nova_compute[185389]: 2026-01-26 17:21:49.246 185393 INFO nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] VM Paused (Lifecycle Event)
Jan 26 17:21:49 compute-0 nova_compute[185389]: 2026-01-26 17:21:49.276 185393 DEBUG nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 17:21:49 compute-0 nova_compute[185389]: 2026-01-26 17:21:49.282 185393 DEBUG nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 17:21:49 compute-0 nova_compute[185389]: 2026-01-26 17:21:49.328 185393 INFO nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 17:21:49 compute-0 podman[256223]: 2026-01-26 17:21:49.463786188 +0000 UTC m=+0.079806858 container create f7c2bca5356e56f633649d9087a134e5db5082177d8f65ef784095d1d5566e8f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 26 17:21:49 compute-0 podman[256223]: 2026-01-26 17:21:49.416983497 +0000 UTC m=+0.033004187 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 17:21:49 compute-0 systemd[1]: Started libpod-conmon-f7c2bca5356e56f633649d9087a134e5db5082177d8f65ef784095d1d5566e8f.scope.
Jan 26 17:21:49 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:21:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6bbc1152a2df0776e444e2908cf51ed6f48a74f65b97babea768769da43c12f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 17:21:49 compute-0 podman[256223]: 2026-01-26 17:21:49.601470015 +0000 UTC m=+0.217490695 container init f7c2bca5356e56f633649d9087a134e5db5082177d8f65ef784095d1d5566e8f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202)
Jan 26 17:21:49 compute-0 podman[256223]: 2026-01-26 17:21:49.609630047 +0000 UTC m=+0.225650707 container start f7c2bca5356e56f633649d9087a134e5db5082177d8f65ef784095d1d5566e8f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 26 17:21:49 compute-0 neutron-haproxy-ovnmeta-4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac[256239]: [NOTICE]   (256244) : New worker (256246) forked
Jan 26 17:21:49 compute-0 neutron-haproxy-ovnmeta-4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac[256239]: [NOTICE]   (256244) : Loading success.
Jan 26 17:21:49 compute-0 nova_compute[185389]: 2026-01-26 17:21:49.870 185393 DEBUG nova.network.neutron [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] [instance: cecfd5ba-76f1-47f6-8845-36e6c7ed9773] Updating instance_info_cache with network_info: [{"id": "9121ca16-ef95-465a-8d54-65a4d9b6659a", "address": "fa:16:3e:98:4e:d9", "network": {"id": "11ede1e9-a5f0-4f1a-82c2-9705645b0db8", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1102589919-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18301e8b436a4fa7ba388e173f305ba9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9121ca16-ef", "ovs_interfaceid": "9121ca16-ef95-465a-8d54-65a4d9b6659a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 17:21:49 compute-0 nova_compute[185389]: 2026-01-26 17:21:49.908 185393 DEBUG oslo_concurrency.lockutils [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] Releasing lock "refresh_cache-cecfd5ba-76f1-47f6-8845-36e6c7ed9773" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 17:21:49 compute-0 nova_compute[185389]: 2026-01-26 17:21:49.909 185393 DEBUG nova.compute.manager [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] [instance: cecfd5ba-76f1-47f6-8845-36e6c7ed9773] Instance network_info: |[{"id": "9121ca16-ef95-465a-8d54-65a4d9b6659a", "address": "fa:16:3e:98:4e:d9", "network": {"id": "11ede1e9-a5f0-4f1a-82c2-9705645b0db8", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1102589919-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18301e8b436a4fa7ba388e173f305ba9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9121ca16-ef", "ovs_interfaceid": "9121ca16-ef95-465a-8d54-65a4d9b6659a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 17:21:49 compute-0 nova_compute[185389]: 2026-01-26 17:21:49.911 185393 DEBUG nova.virt.libvirt.driver [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] [instance: cecfd5ba-76f1-47f6-8845-36e6c7ed9773] Start _get_guest_xml network_info=[{"id": "9121ca16-ef95-465a-8d54-65a4d9b6659a", "address": "fa:16:3e:98:4e:d9", "network": {"id": "11ede1e9-a5f0-4f1a-82c2-9705645b0db8", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1102589919-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18301e8b436a4fa7ba388e173f305ba9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9121ca16-ef", "ovs_interfaceid": "9121ca16-ef95-465a-8d54-65a4d9b6659a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T17:20:37Z,direct_url=<?>,disk_format='qcow2',id=90acf026-cf3a-409a-999e-35d89bb9a6bf,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='aa8f1f3bbce34237a208c8e92ca9286f',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T17:20:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'boot_index': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '90acf026-cf3a-409a-999e-35d89bb9a6bf'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 17:21:49 compute-0 nova_compute[185389]: 2026-01-26 17:21:49.923 185393 WARNING nova.virt.libvirt.driver [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:21:49 compute-0 nova_compute[185389]: 2026-01-26 17:21:49.930 185393 DEBUG nova.virt.libvirt.host [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 17:21:49 compute-0 nova_compute[185389]: 2026-01-26 17:21:49.931 185393 DEBUG nova.virt.libvirt.host [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 17:21:49 compute-0 nova_compute[185389]: 2026-01-26 17:21:49.936 185393 DEBUG nova.virt.libvirt.host [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 17:21:49 compute-0 nova_compute[185389]: 2026-01-26 17:21:49.937 185393 DEBUG nova.virt.libvirt.host [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 17:21:49 compute-0 nova_compute[185389]: 2026-01-26 17:21:49.937 185393 DEBUG nova.virt.libvirt.driver [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 17:21:49 compute-0 nova_compute[185389]: 2026-01-26 17:21:49.937 185393 DEBUG nova.virt.hardware [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T17:20:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8d013773-e8ea-4b83-a8e3-f58d9749637f',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T17:20:37Z,direct_url=<?>,disk_format='qcow2',id=90acf026-cf3a-409a-999e-35d89bb9a6bf,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='aa8f1f3bbce34237a208c8e92ca9286f',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T17:20:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 17:21:49 compute-0 nova_compute[185389]: 2026-01-26 17:21:49.938 185393 DEBUG nova.virt.hardware [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 17:21:49 compute-0 nova_compute[185389]: 2026-01-26 17:21:49.938 185393 DEBUG nova.virt.hardware [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 17:21:49 compute-0 nova_compute[185389]: 2026-01-26 17:21:49.938 185393 DEBUG nova.virt.hardware [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 17:21:49 compute-0 nova_compute[185389]: 2026-01-26 17:21:49.938 185393 DEBUG nova.virt.hardware [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 17:21:49 compute-0 nova_compute[185389]: 2026-01-26 17:21:49.939 185393 DEBUG nova.virt.hardware [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 17:21:49 compute-0 nova_compute[185389]: 2026-01-26 17:21:49.939 185393 DEBUG nova.virt.hardware [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 17:21:49 compute-0 nova_compute[185389]: 2026-01-26 17:21:49.939 185393 DEBUG nova.virt.hardware [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 17:21:49 compute-0 nova_compute[185389]: 2026-01-26 17:21:49.939 185393 DEBUG nova.virt.hardware [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 17:21:49 compute-0 nova_compute[185389]: 2026-01-26 17:21:49.940 185393 DEBUG nova.virt.hardware [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 17:21:49 compute-0 nova_compute[185389]: 2026-01-26 17:21:49.940 185393 DEBUG nova.virt.hardware [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 17:21:49 compute-0 nova_compute[185389]: 2026-01-26 17:21:49.943 185393 DEBUG nova.virt.libvirt.vif [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T17:21:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-88613890',display_name='tempest-ServerAddressesTestJSON-server-88613890',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-88613890',id=8,image_ref='90acf026-cf3a-409a-999e-35d89bb9a6bf',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='18301e8b436a4fa7ba388e173f305ba9',ramdisk_id='',reservation_id='r-6ngnbhld',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='90acf026-cf3a-409a-999e-35d89bb9a6bf',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-569621097',owner_user_name='tempest-ServerAddressesTestJSON-569621097-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T17:21:35Z,user_data=None,user_id='be42df6828874d2e90f3dabbd62031cc',uuid=cecfd5ba-76f1-47f6-8845-36e6c7ed9773,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9121ca16-ef95-465a-8d54-65a4d9b6659a", "address": "fa:16:3e:98:4e:d9", "network": {"id": "11ede1e9-a5f0-4f1a-82c2-9705645b0db8", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1102589919-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18301e8b436a4fa7ba388e173f305ba9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9121ca16-ef", "ovs_interfaceid": "9121ca16-ef95-465a-8d54-65a4d9b6659a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 17:21:49 compute-0 nova_compute[185389]: 2026-01-26 17:21:49.944 185393 DEBUG nova.network.os_vif_util [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] Converting VIF {"id": "9121ca16-ef95-465a-8d54-65a4d9b6659a", "address": "fa:16:3e:98:4e:d9", "network": {"id": "11ede1e9-a5f0-4f1a-82c2-9705645b0db8", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1102589919-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18301e8b436a4fa7ba388e173f305ba9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9121ca16-ef", "ovs_interfaceid": "9121ca16-ef95-465a-8d54-65a4d9b6659a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 17:21:49 compute-0 nova_compute[185389]: 2026-01-26 17:21:49.944 185393 DEBUG nova.network.os_vif_util [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:98:4e:d9,bridge_name='br-int',has_traffic_filtering=True,id=9121ca16-ef95-465a-8d54-65a4d9b6659a,network=Network(11ede1e9-a5f0-4f1a-82c2-9705645b0db8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9121ca16-ef') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 17:21:49 compute-0 nova_compute[185389]: 2026-01-26 17:21:49.945 185393 DEBUG nova.objects.instance [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] Lazy-loading 'pci_devices' on Instance uuid cecfd5ba-76f1-47f6-8845-36e6c7ed9773 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 17:21:50 compute-0 nova_compute[185389]: 2026-01-26 17:21:50.078 185393 DEBUG nova.network.neutron [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] [instance: 8c28c24a-cab4-43b3-b9ee-4ce40d092c71] Updating instance_info_cache with network_info: [{"id": "86c33312-6904-4dd4-9a95-7fd318980439", "address": "fa:16:3e:7d:0a:7b", "network": {"id": "92052205-69bb-42de-8996-b5b0b55d3221", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-626505889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "854cc1d25bbe4358a1a0687611af792e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86c33312-69", "ovs_interfaceid": "86c33312-6904-4dd4-9a95-7fd318980439", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 17:21:50 compute-0 nova_compute[185389]: 2026-01-26 17:21:50.119 185393 DEBUG nova.virt.libvirt.driver [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] [instance: cecfd5ba-76f1-47f6-8845-36e6c7ed9773] End _get_guest_xml xml=<domain type="kvm">
Jan 26 17:21:50 compute-0 nova_compute[185389]:   <uuid>cecfd5ba-76f1-47f6-8845-36e6c7ed9773</uuid>
Jan 26 17:21:50 compute-0 nova_compute[185389]:   <name>instance-00000008</name>
Jan 26 17:21:50 compute-0 nova_compute[185389]:   <memory>131072</memory>
Jan 26 17:21:50 compute-0 nova_compute[185389]:   <vcpu>1</vcpu>
Jan 26 17:21:50 compute-0 nova_compute[185389]:   <metadata>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 17:21:50 compute-0 nova_compute[185389]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:       <nova:name>tempest-ServerAddressesTestJSON-server-88613890</nova:name>
Jan 26 17:21:50 compute-0 nova_compute[185389]:       <nova:creationTime>2026-01-26 17:21:49</nova:creationTime>
Jan 26 17:21:50 compute-0 nova_compute[185389]:       <nova:flavor name="m1.nano">
Jan 26 17:21:50 compute-0 nova_compute[185389]:         <nova:memory>128</nova:memory>
Jan 26 17:21:50 compute-0 nova_compute[185389]:         <nova:disk>1</nova:disk>
Jan 26 17:21:50 compute-0 nova_compute[185389]:         <nova:swap>0</nova:swap>
Jan 26 17:21:50 compute-0 nova_compute[185389]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 17:21:50 compute-0 nova_compute[185389]:         <nova:vcpus>1</nova:vcpus>
Jan 26 17:21:50 compute-0 nova_compute[185389]:       </nova:flavor>
Jan 26 17:21:50 compute-0 nova_compute[185389]:       <nova:owner>
Jan 26 17:21:50 compute-0 nova_compute[185389]:         <nova:user uuid="be42df6828874d2e90f3dabbd62031cc">tempest-ServerAddressesTestJSON-569621097-project-member</nova:user>
Jan 26 17:21:50 compute-0 nova_compute[185389]:         <nova:project uuid="18301e8b436a4fa7ba388e173f305ba9">tempest-ServerAddressesTestJSON-569621097</nova:project>
Jan 26 17:21:50 compute-0 nova_compute[185389]:       </nova:owner>
Jan 26 17:21:50 compute-0 nova_compute[185389]:       <nova:root type="image" uuid="90acf026-cf3a-409a-999e-35d89bb9a6bf"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:       <nova:ports>
Jan 26 17:21:50 compute-0 nova_compute[185389]:         <nova:port uuid="9121ca16-ef95-465a-8d54-65a4d9b6659a">
Jan 26 17:21:50 compute-0 nova_compute[185389]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:         </nova:port>
Jan 26 17:21:50 compute-0 nova_compute[185389]:       </nova:ports>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     </nova:instance>
Jan 26 17:21:50 compute-0 nova_compute[185389]:   </metadata>
Jan 26 17:21:50 compute-0 nova_compute[185389]:   <sysinfo type="smbios">
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <system>
Jan 26 17:21:50 compute-0 nova_compute[185389]:       <entry name="manufacturer">RDO</entry>
Jan 26 17:21:50 compute-0 nova_compute[185389]:       <entry name="product">OpenStack Compute</entry>
Jan 26 17:21:50 compute-0 nova_compute[185389]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 17:21:50 compute-0 nova_compute[185389]:       <entry name="serial">cecfd5ba-76f1-47f6-8845-36e6c7ed9773</entry>
Jan 26 17:21:50 compute-0 nova_compute[185389]:       <entry name="uuid">cecfd5ba-76f1-47f6-8845-36e6c7ed9773</entry>
Jan 26 17:21:50 compute-0 nova_compute[185389]:       <entry name="family">Virtual Machine</entry>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     </system>
Jan 26 17:21:50 compute-0 nova_compute[185389]:   </sysinfo>
Jan 26 17:21:50 compute-0 nova_compute[185389]:   <os>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <boot dev="hd"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <smbios mode="sysinfo"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:   </os>
Jan 26 17:21:50 compute-0 nova_compute[185389]:   <features>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <acpi/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <apic/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <vmcoreinfo/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:   </features>
Jan 26 17:21:50 compute-0 nova_compute[185389]:   <clock offset="utc">
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <timer name="hpet" present="no"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:   </clock>
Jan 26 17:21:50 compute-0 nova_compute[185389]:   <cpu mode="host-model" match="exact">
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:   </cpu>
Jan 26 17:21:50 compute-0 nova_compute[185389]:   <devices>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <disk type="file" device="disk">
Jan 26 17:21:50 compute-0 nova_compute[185389]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:       <source file="/var/lib/nova/instances/cecfd5ba-76f1-47f6-8845-36e6c7ed9773/disk"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:       <target dev="vda" bus="virtio"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     </disk>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <disk type="file" device="cdrom">
Jan 26 17:21:50 compute-0 nova_compute[185389]:       <driver name="qemu" type="raw" cache="none"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:       <source file="/var/lib/nova/instances/cecfd5ba-76f1-47f6-8845-36e6c7ed9773/disk.config"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:       <target dev="sda" bus="sata"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     </disk>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <interface type="ethernet">
Jan 26 17:21:50 compute-0 nova_compute[185389]:       <mac address="fa:16:3e:98:4e:d9"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:       <model type="virtio"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:       <mtu size="1442"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:       <target dev="tap9121ca16-ef"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     </interface>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <serial type="pty">
Jan 26 17:21:50 compute-0 nova_compute[185389]:       <log file="/var/lib/nova/instances/cecfd5ba-76f1-47f6-8845-36e6c7ed9773/console.log" append="off"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     </serial>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <video>
Jan 26 17:21:50 compute-0 nova_compute[185389]:       <model type="virtio"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     </video>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <input type="tablet" bus="usb"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <rng model="virtio">
Jan 26 17:21:50 compute-0 nova_compute[185389]:       <backend model="random">/dev/urandom</backend>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     </rng>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <controller type="usb" index="0"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <memballoon model="virtio">
Jan 26 17:21:50 compute-0 nova_compute[185389]:       <stats period="10"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     </memballoon>
Jan 26 17:21:50 compute-0 nova_compute[185389]:   </devices>
Jan 26 17:21:50 compute-0 nova_compute[185389]: </domain>
Jan 26 17:21:50 compute-0 nova_compute[185389]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 17:21:50 compute-0 nova_compute[185389]: 2026-01-26 17:21:50.119 185393 DEBUG nova.compute.manager [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] [instance: cecfd5ba-76f1-47f6-8845-36e6c7ed9773] Preparing to wait for external event network-vif-plugged-9121ca16-ef95-465a-8d54-65a4d9b6659a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 17:21:50 compute-0 nova_compute[185389]: 2026-01-26 17:21:50.120 185393 DEBUG oslo_concurrency.lockutils [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] Acquiring lock "cecfd5ba-76f1-47f6-8845-36e6c7ed9773-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:21:50 compute-0 nova_compute[185389]: 2026-01-26 17:21:50.121 185393 DEBUG oslo_concurrency.lockutils [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] Lock "cecfd5ba-76f1-47f6-8845-36e6c7ed9773-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:21:50 compute-0 nova_compute[185389]: 2026-01-26 17:21:50.121 185393 DEBUG oslo_concurrency.lockutils [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] Lock "cecfd5ba-76f1-47f6-8845-36e6c7ed9773-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:21:50 compute-0 nova_compute[185389]: 2026-01-26 17:21:50.122 185393 DEBUG nova.virt.libvirt.vif [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T17:21:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-88613890',display_name='tempest-ServerAddressesTestJSON-server-88613890',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-88613890',id=8,image_ref='90acf026-cf3a-409a-999e-35d89bb9a6bf',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='18301e8b436a4fa7ba388e173f305ba9',ramdisk_id='',reservation_id='r-6ngnbhld',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='90acf026-cf3a-409a-999e-35d89bb9a6bf',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-569621097',owner_user_name='tempest-ServerAddressesTestJSON-569621097-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T17:21:35Z,user_data=None,user_id='be42df6828874d2e90f3dabbd62031cc',uuid=cecfd5ba-76f1-47f6-8845-36e6c7ed9773,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9121ca16-ef95-465a-8d54-65a4d9b6659a", "address": "fa:16:3e:98:4e:d9", "network": {"id": "11ede1e9-a5f0-4f1a-82c2-9705645b0db8", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1102589919-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18301e8b436a4fa7ba388e173f305ba9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9121ca16-ef", "ovs_interfaceid": "9121ca16-ef95-465a-8d54-65a4d9b6659a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 17:21:50 compute-0 nova_compute[185389]: 2026-01-26 17:21:50.122 185393 DEBUG nova.network.os_vif_util [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] Converting VIF {"id": "9121ca16-ef95-465a-8d54-65a4d9b6659a", "address": "fa:16:3e:98:4e:d9", "network": {"id": "11ede1e9-a5f0-4f1a-82c2-9705645b0db8", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1102589919-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18301e8b436a4fa7ba388e173f305ba9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9121ca16-ef", "ovs_interfaceid": "9121ca16-ef95-465a-8d54-65a4d9b6659a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 17:21:50 compute-0 nova_compute[185389]: 2026-01-26 17:21:50.123 185393 DEBUG nova.network.os_vif_util [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:98:4e:d9,bridge_name='br-int',has_traffic_filtering=True,id=9121ca16-ef95-465a-8d54-65a4d9b6659a,network=Network(11ede1e9-a5f0-4f1a-82c2-9705645b0db8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9121ca16-ef') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 17:21:50 compute-0 nova_compute[185389]: 2026-01-26 17:21:50.123 185393 DEBUG os_vif [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:98:4e:d9,bridge_name='br-int',has_traffic_filtering=True,id=9121ca16-ef95-465a-8d54-65a4d9b6659a,network=Network(11ede1e9-a5f0-4f1a-82c2-9705645b0db8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9121ca16-ef') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 17:21:50 compute-0 nova_compute[185389]: 2026-01-26 17:21:50.124 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:50 compute-0 nova_compute[185389]: 2026-01-26 17:21:50.124 185393 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:21:50 compute-0 nova_compute[185389]: 2026-01-26 17:21:50.124 185393 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 17:21:50 compute-0 nova_compute[185389]: 2026-01-26 17:21:50.127 185393 DEBUG oslo_concurrency.lockutils [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] Releasing lock "refresh_cache-8c28c24a-cab4-43b3-b9ee-4ce40d092c71" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 17:21:50 compute-0 nova_compute[185389]: 2026-01-26 17:21:50.127 185393 DEBUG nova.compute.manager [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] [instance: 8c28c24a-cab4-43b3-b9ee-4ce40d092c71] Instance network_info: |[{"id": "86c33312-6904-4dd4-9a95-7fd318980439", "address": "fa:16:3e:7d:0a:7b", "network": {"id": "92052205-69bb-42de-8996-b5b0b55d3221", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-626505889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "854cc1d25bbe4358a1a0687611af792e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86c33312-69", "ovs_interfaceid": "86c33312-6904-4dd4-9a95-7fd318980439", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 17:21:50 compute-0 nova_compute[185389]: 2026-01-26 17:21:50.128 185393 DEBUG oslo_concurrency.lockutils [req-b42ccb82-f806-45f0-baf0-8570ea3d51e1 req-fa2f9b72-522f-4d9a-8515-0798cdfe50e7 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquired lock "refresh_cache-8c28c24a-cab4-43b3-b9ee-4ce40d092c71" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 17:21:50 compute-0 nova_compute[185389]: 2026-01-26 17:21:50.128 185393 DEBUG nova.network.neutron [req-b42ccb82-f806-45f0-baf0-8570ea3d51e1 req-fa2f9b72-522f-4d9a-8515-0798cdfe50e7 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 8c28c24a-cab4-43b3-b9ee-4ce40d092c71] Refreshing network info cache for port 86c33312-6904-4dd4-9a95-7fd318980439 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 17:21:50 compute-0 nova_compute[185389]: 2026-01-26 17:21:50.131 185393 DEBUG nova.virt.libvirt.driver [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] [instance: 8c28c24a-cab4-43b3-b9ee-4ce40d092c71] Start _get_guest_xml network_info=[{"id": "86c33312-6904-4dd4-9a95-7fd318980439", "address": "fa:16:3e:7d:0a:7b", "network": {"id": "92052205-69bb-42de-8996-b5b0b55d3221", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-626505889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "854cc1d25bbe4358a1a0687611af792e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86c33312-69", "ovs_interfaceid": "86c33312-6904-4dd4-9a95-7fd318980439", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T17:20:37Z,direct_url=<?>,disk_format='qcow2',id=90acf026-cf3a-409a-999e-35d89bb9a6bf,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='aa8f1f3bbce34237a208c8e92ca9286f',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T17:20:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'boot_index': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '90acf026-cf3a-409a-999e-35d89bb9a6bf'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 17:21:50 compute-0 nova_compute[185389]: 2026-01-26 17:21:50.133 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:50 compute-0 nova_compute[185389]: 2026-01-26 17:21:50.133 185393 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9121ca16-ef, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:21:50 compute-0 nova_compute[185389]: 2026-01-26 17:21:50.134 185393 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9121ca16-ef, col_values=(('external_ids', {'iface-id': '9121ca16-ef95-465a-8d54-65a4d9b6659a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:98:4e:d9', 'vm-uuid': 'cecfd5ba-76f1-47f6-8845-36e6c7ed9773'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:21:50 compute-0 nova_compute[185389]: 2026-01-26 17:21:50.136 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:50 compute-0 NetworkManager[56253]: <info>  [1769448110.1378] manager: (tap9121ca16-ef): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/39)
Jan 26 17:21:50 compute-0 nova_compute[185389]: 2026-01-26 17:21:50.138 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 17:21:50 compute-0 nova_compute[185389]: 2026-01-26 17:21:50.144 185393 WARNING nova.virt.libvirt.driver [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:21:50 compute-0 nova_compute[185389]: 2026-01-26 17:21:50.148 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:50 compute-0 nova_compute[185389]: 2026-01-26 17:21:50.150 185393 INFO os_vif [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:98:4e:d9,bridge_name='br-int',has_traffic_filtering=True,id=9121ca16-ef95-465a-8d54-65a4d9b6659a,network=Network(11ede1e9-a5f0-4f1a-82c2-9705645b0db8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9121ca16-ef')
Jan 26 17:21:50 compute-0 nova_compute[185389]: 2026-01-26 17:21:50.155 185393 DEBUG nova.virt.libvirt.host [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 17:21:50 compute-0 nova_compute[185389]: 2026-01-26 17:21:50.156 185393 DEBUG nova.virt.libvirt.host [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 17:21:50 compute-0 nova_compute[185389]: 2026-01-26 17:21:50.166 185393 DEBUG nova.virt.libvirt.host [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 17:21:50 compute-0 nova_compute[185389]: 2026-01-26 17:21:50.167 185393 DEBUG nova.virt.libvirt.host [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 17:21:50 compute-0 nova_compute[185389]: 2026-01-26 17:21:50.168 185393 DEBUG nova.virt.libvirt.driver [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 17:21:50 compute-0 nova_compute[185389]: 2026-01-26 17:21:50.168 185393 DEBUG nova.virt.hardware [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T17:20:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8d013773-e8ea-4b83-a8e3-f58d9749637f',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T17:20:37Z,direct_url=<?>,disk_format='qcow2',id=90acf026-cf3a-409a-999e-35d89bb9a6bf,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='aa8f1f3bbce34237a208c8e92ca9286f',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T17:20:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 17:21:50 compute-0 nova_compute[185389]: 2026-01-26 17:21:50.169 185393 DEBUG nova.virt.hardware [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 17:21:50 compute-0 nova_compute[185389]: 2026-01-26 17:21:50.169 185393 DEBUG nova.virt.hardware [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 17:21:50 compute-0 nova_compute[185389]: 2026-01-26 17:21:50.170 185393 DEBUG nova.virt.hardware [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 17:21:50 compute-0 nova_compute[185389]: 2026-01-26 17:21:50.170 185393 DEBUG nova.virt.hardware [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 17:21:50 compute-0 nova_compute[185389]: 2026-01-26 17:21:50.171 185393 DEBUG nova.virt.hardware [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 17:21:50 compute-0 nova_compute[185389]: 2026-01-26 17:21:50.171 185393 DEBUG nova.virt.hardware [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 17:21:50 compute-0 nova_compute[185389]: 2026-01-26 17:21:50.172 185393 DEBUG nova.virt.hardware [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 17:21:50 compute-0 nova_compute[185389]: 2026-01-26 17:21:50.172 185393 DEBUG nova.virt.hardware [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 17:21:50 compute-0 nova_compute[185389]: 2026-01-26 17:21:50.173 185393 DEBUG nova.virt.hardware [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 17:21:50 compute-0 nova_compute[185389]: 2026-01-26 17:21:50.173 185393 DEBUG nova.virt.hardware [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 17:21:50 compute-0 nova_compute[185389]: 2026-01-26 17:21:50.177 185393 DEBUG nova.virt.libvirt.vif [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T17:21:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-692866047',display_name='tempest-ServersTestManualDisk-server-692866047',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-692866047',id=9,image_ref='90acf026-cf3a-409a-999e-35d89bb9a6bf',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCugjq2hLog8NxEN+V2U0sUpwXrrxhpFq5XCQG80oprZO9bLQcp2/aL0kKNeggZCa078aw+uAob0EH1cHywfjLqiOV4FpNB+Sqw44BwE3DbBn/9eOg+iYYMdGk07/+QebQ==',key_name='tempest-keypair-1760181719',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='854cc1d25bbe4358a1a0687611af792e',ramdisk_id='',reservation_id='r-3n75zltf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='90acf026-cf3a-409a-999e-35d89bb9a6bf',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-26592744',owner_user_name='tempest-ServersTestManualDisk-26592744-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T17:21:38Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='06957310edd64b7e95b237aa77f5311d',uuid=8c28c24a-cab4-43b3-b9ee-4ce40d092c71,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "86c33312-6904-4dd4-9a95-7fd318980439", "address": "fa:16:3e:7d:0a:7b", "network": {"id": "92052205-69bb-42de-8996-b5b0b55d3221", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-626505889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "854cc1d25bbe4358a1a0687611af792e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86c33312-69", "ovs_interfaceid": "86c33312-6904-4dd4-9a95-7fd318980439", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 17:21:50 compute-0 nova_compute[185389]: 2026-01-26 17:21:50.178 185393 DEBUG nova.network.os_vif_util [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] Converting VIF {"id": "86c33312-6904-4dd4-9a95-7fd318980439", "address": "fa:16:3e:7d:0a:7b", "network": {"id": "92052205-69bb-42de-8996-b5b0b55d3221", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-626505889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "854cc1d25bbe4358a1a0687611af792e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86c33312-69", "ovs_interfaceid": "86c33312-6904-4dd4-9a95-7fd318980439", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 17:21:50 compute-0 nova_compute[185389]: 2026-01-26 17:21:50.179 185393 DEBUG nova.network.os_vif_util [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:0a:7b,bridge_name='br-int',has_traffic_filtering=True,id=86c33312-6904-4dd4-9a95-7fd318980439,network=Network(92052205-69bb-42de-8996-b5b0b55d3221),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86c33312-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 17:21:50 compute-0 nova_compute[185389]: 2026-01-26 17:21:50.180 185393 DEBUG nova.objects.instance [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] Lazy-loading 'pci_devices' on Instance uuid 8c28c24a-cab4-43b3-b9ee-4ce40d092c71 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 17:21:50 compute-0 nova_compute[185389]: 2026-01-26 17:21:50.195 185393 DEBUG nova.virt.libvirt.driver [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] [instance: 8c28c24a-cab4-43b3-b9ee-4ce40d092c71] End _get_guest_xml xml=<domain type="kvm">
Jan 26 17:21:50 compute-0 nova_compute[185389]:   <uuid>8c28c24a-cab4-43b3-b9ee-4ce40d092c71</uuid>
Jan 26 17:21:50 compute-0 nova_compute[185389]:   <name>instance-00000009</name>
Jan 26 17:21:50 compute-0 nova_compute[185389]:   <memory>131072</memory>
Jan 26 17:21:50 compute-0 nova_compute[185389]:   <vcpu>1</vcpu>
Jan 26 17:21:50 compute-0 nova_compute[185389]:   <metadata>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 17:21:50 compute-0 nova_compute[185389]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:       <nova:name>tempest-ServersTestManualDisk-server-692866047</nova:name>
Jan 26 17:21:50 compute-0 nova_compute[185389]:       <nova:creationTime>2026-01-26 17:21:50</nova:creationTime>
Jan 26 17:21:50 compute-0 nova_compute[185389]:       <nova:flavor name="m1.nano">
Jan 26 17:21:50 compute-0 nova_compute[185389]:         <nova:memory>128</nova:memory>
Jan 26 17:21:50 compute-0 nova_compute[185389]:         <nova:disk>1</nova:disk>
Jan 26 17:21:50 compute-0 nova_compute[185389]:         <nova:swap>0</nova:swap>
Jan 26 17:21:50 compute-0 nova_compute[185389]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 17:21:50 compute-0 nova_compute[185389]:         <nova:vcpus>1</nova:vcpus>
Jan 26 17:21:50 compute-0 nova_compute[185389]:       </nova:flavor>
Jan 26 17:21:50 compute-0 nova_compute[185389]:       <nova:owner>
Jan 26 17:21:50 compute-0 nova_compute[185389]:         <nova:user uuid="06957310edd64b7e95b237aa77f5311d">tempest-ServersTestManualDisk-26592744-project-member</nova:user>
Jan 26 17:21:50 compute-0 nova_compute[185389]:         <nova:project uuid="854cc1d25bbe4358a1a0687611af792e">tempest-ServersTestManualDisk-26592744</nova:project>
Jan 26 17:21:50 compute-0 nova_compute[185389]:       </nova:owner>
Jan 26 17:21:50 compute-0 nova_compute[185389]:       <nova:root type="image" uuid="90acf026-cf3a-409a-999e-35d89bb9a6bf"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:       <nova:ports>
Jan 26 17:21:50 compute-0 nova_compute[185389]:         <nova:port uuid="86c33312-6904-4dd4-9a95-7fd318980439">
Jan 26 17:21:50 compute-0 nova_compute[185389]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:         </nova:port>
Jan 26 17:21:50 compute-0 nova_compute[185389]:       </nova:ports>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     </nova:instance>
Jan 26 17:21:50 compute-0 nova_compute[185389]:   </metadata>
Jan 26 17:21:50 compute-0 nova_compute[185389]:   <sysinfo type="smbios">
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <system>
Jan 26 17:21:50 compute-0 nova_compute[185389]:       <entry name="manufacturer">RDO</entry>
Jan 26 17:21:50 compute-0 nova_compute[185389]:       <entry name="product">OpenStack Compute</entry>
Jan 26 17:21:50 compute-0 nova_compute[185389]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 17:21:50 compute-0 nova_compute[185389]:       <entry name="serial">8c28c24a-cab4-43b3-b9ee-4ce40d092c71</entry>
Jan 26 17:21:50 compute-0 nova_compute[185389]:       <entry name="uuid">8c28c24a-cab4-43b3-b9ee-4ce40d092c71</entry>
Jan 26 17:21:50 compute-0 nova_compute[185389]:       <entry name="family">Virtual Machine</entry>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     </system>
Jan 26 17:21:50 compute-0 nova_compute[185389]:   </sysinfo>
Jan 26 17:21:50 compute-0 nova_compute[185389]:   <os>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <boot dev="hd"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <smbios mode="sysinfo"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:   </os>
Jan 26 17:21:50 compute-0 nova_compute[185389]:   <features>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <acpi/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <apic/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <vmcoreinfo/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:   </features>
Jan 26 17:21:50 compute-0 nova_compute[185389]:   <clock offset="utc">
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <timer name="hpet" present="no"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:   </clock>
Jan 26 17:21:50 compute-0 nova_compute[185389]:   <cpu mode="host-model" match="exact">
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:   </cpu>
Jan 26 17:21:50 compute-0 nova_compute[185389]:   <devices>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <disk type="file" device="disk">
Jan 26 17:21:50 compute-0 nova_compute[185389]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:       <source file="/var/lib/nova/instances/8c28c24a-cab4-43b3-b9ee-4ce40d092c71/disk"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:       <target dev="vda" bus="virtio"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     </disk>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <disk type="file" device="cdrom">
Jan 26 17:21:50 compute-0 nova_compute[185389]:       <driver name="qemu" type="raw" cache="none"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:       <source file="/var/lib/nova/instances/8c28c24a-cab4-43b3-b9ee-4ce40d092c71/disk.config"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:       <target dev="sda" bus="sata"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     </disk>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <interface type="ethernet">
Jan 26 17:21:50 compute-0 nova_compute[185389]:       <mac address="fa:16:3e:7d:0a:7b"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:       <model type="virtio"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:       <mtu size="1442"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:       <target dev="tap86c33312-69"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     </interface>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <serial type="pty">
Jan 26 17:21:50 compute-0 nova_compute[185389]:       <log file="/var/lib/nova/instances/8c28c24a-cab4-43b3-b9ee-4ce40d092c71/console.log" append="off"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     </serial>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <video>
Jan 26 17:21:50 compute-0 nova_compute[185389]:       <model type="virtio"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     </video>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <input type="tablet" bus="usb"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <rng model="virtio">
Jan 26 17:21:50 compute-0 nova_compute[185389]:       <backend model="random">/dev/urandom</backend>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     </rng>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <controller type="usb" index="0"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     <memballoon model="virtio">
Jan 26 17:21:50 compute-0 nova_compute[185389]:       <stats period="10"/>
Jan 26 17:21:50 compute-0 nova_compute[185389]:     </memballoon>
Jan 26 17:21:50 compute-0 nova_compute[185389]:   </devices>
Jan 26 17:21:50 compute-0 nova_compute[185389]: </domain>
Jan 26 17:21:50 compute-0 nova_compute[185389]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 17:21:50 compute-0 nova_compute[185389]: 2026-01-26 17:21:50.196 185393 DEBUG nova.compute.manager [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] [instance: 8c28c24a-cab4-43b3-b9ee-4ce40d092c71] Preparing to wait for external event network-vif-plugged-86c33312-6904-4dd4-9a95-7fd318980439 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 17:21:50 compute-0 nova_compute[185389]: 2026-01-26 17:21:50.196 185393 DEBUG oslo_concurrency.lockutils [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] Acquiring lock "8c28c24a-cab4-43b3-b9ee-4ce40d092c71-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:21:50 compute-0 nova_compute[185389]: 2026-01-26 17:21:50.196 185393 DEBUG oslo_concurrency.lockutils [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] Lock "8c28c24a-cab4-43b3-b9ee-4ce40d092c71-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:21:50 compute-0 nova_compute[185389]: 2026-01-26 17:21:50.197 185393 DEBUG oslo_concurrency.lockutils [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] Lock "8c28c24a-cab4-43b3-b9ee-4ce40d092c71-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:21:50 compute-0 nova_compute[185389]: 2026-01-26 17:21:50.197 185393 DEBUG nova.virt.libvirt.vif [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T17:21:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-692866047',display_name='tempest-ServersTestManualDisk-server-692866047',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-692866047',id=9,image_ref='90acf026-cf3a-409a-999e-35d89bb9a6bf',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCugjq2hLog8NxEN+V2U0sUpwXrrxhpFq5XCQG80oprZO9bLQcp2/aL0kKNeggZCa078aw+uAob0EH1cHywfjLqiOV4FpNB+Sqw44BwE3DbBn/9eOg+iYYMdGk07/+QebQ==',key_name='tempest-keypair-1760181719',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='854cc1d25bbe4358a1a0687611af792e',ramdisk_id='',reservation_id='r-3n75zltf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='90acf026-cf3a-409a-999e-35d89bb9a6bf',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-26592744',owner_user_name='tempest-ServersTestManualDisk-26592744-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T17:21:38Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='06957310edd64b7e95b237aa77f5311d',uuid=8c28c24a-cab4-43b3-b9ee-4ce40d092c71,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "86c33312-6904-4dd4-9a95-7fd318980439", "address": "fa:16:3e:7d:0a:7b", "network": {"id": "92052205-69bb-42de-8996-b5b0b55d3221", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-626505889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "854cc1d25bbe4358a1a0687611af792e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86c33312-69", "ovs_interfaceid": "86c33312-6904-4dd4-9a95-7fd318980439", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 17:21:50 compute-0 nova_compute[185389]: 2026-01-26 17:21:50.198 185393 DEBUG nova.network.os_vif_util [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] Converting VIF {"id": "86c33312-6904-4dd4-9a95-7fd318980439", "address": "fa:16:3e:7d:0a:7b", "network": {"id": "92052205-69bb-42de-8996-b5b0b55d3221", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-626505889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "854cc1d25bbe4358a1a0687611af792e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86c33312-69", "ovs_interfaceid": "86c33312-6904-4dd4-9a95-7fd318980439", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 17:21:50 compute-0 nova_compute[185389]: 2026-01-26 17:21:50.198 185393 DEBUG nova.network.os_vif_util [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:0a:7b,bridge_name='br-int',has_traffic_filtering=True,id=86c33312-6904-4dd4-9a95-7fd318980439,network=Network(92052205-69bb-42de-8996-b5b0b55d3221),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86c33312-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 17:21:50 compute-0 nova_compute[185389]: 2026-01-26 17:21:50.198 185393 DEBUG os_vif [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:0a:7b,bridge_name='br-int',has_traffic_filtering=True,id=86c33312-6904-4dd4-9a95-7fd318980439,network=Network(92052205-69bb-42de-8996-b5b0b55d3221),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86c33312-69') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 17:21:50 compute-0 nova_compute[185389]: 2026-01-26 17:21:50.199 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:50 compute-0 nova_compute[185389]: 2026-01-26 17:21:50.199 185393 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:21:50 compute-0 nova_compute[185389]: 2026-01-26 17:21:50.200 185393 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 17:21:50 compute-0 nova_compute[185389]: 2026-01-26 17:21:50.204 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:50 compute-0 nova_compute[185389]: 2026-01-26 17:21:50.204 185393 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap86c33312-69, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:21:50 compute-0 nova_compute[185389]: 2026-01-26 17:21:50.204 185393 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap86c33312-69, col_values=(('external_ids', {'iface-id': '86c33312-6904-4dd4-9a95-7fd318980439', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7d:0a:7b', 'vm-uuid': '8c28c24a-cab4-43b3-b9ee-4ce40d092c71'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:21:50 compute-0 nova_compute[185389]: 2026-01-26 17:21:50.207 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:50 compute-0 NetworkManager[56253]: <info>  [1769448110.2083] manager: (tap86c33312-69): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/40)
Jan 26 17:21:50 compute-0 nova_compute[185389]: 2026-01-26 17:21:50.209 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 17:21:50 compute-0 nova_compute[185389]: 2026-01-26 17:21:50.219 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:50 compute-0 nova_compute[185389]: 2026-01-26 17:21:50.220 185393 INFO os_vif [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:0a:7b,bridge_name='br-int',has_traffic_filtering=True,id=86c33312-6904-4dd4-9a95-7fd318980439,network=Network(92052205-69bb-42de-8996-b5b0b55d3221),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86c33312-69')
Jan 26 17:21:50 compute-0 nova_compute[185389]: 2026-01-26 17:21:50.224 185393 DEBUG nova.virt.libvirt.driver [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 17:21:50 compute-0 nova_compute[185389]: 2026-01-26 17:21:50.224 185393 DEBUG nova.virt.libvirt.driver [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 17:21:50 compute-0 nova_compute[185389]: 2026-01-26 17:21:50.224 185393 DEBUG nova.virt.libvirt.driver [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] No VIF found with MAC fa:16:3e:98:4e:d9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 17:21:50 compute-0 nova_compute[185389]: 2026-01-26 17:21:50.225 185393 INFO nova.virt.libvirt.driver [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] [instance: cecfd5ba-76f1-47f6-8845-36e6c7ed9773] Using config drive
Jan 26 17:21:50 compute-0 nova_compute[185389]: 2026-01-26 17:21:50.321 185393 DEBUG nova.virt.libvirt.driver [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 17:21:50 compute-0 nova_compute[185389]: 2026-01-26 17:21:50.322 185393 DEBUG nova.virt.libvirt.driver [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 17:21:50 compute-0 nova_compute[185389]: 2026-01-26 17:21:50.322 185393 DEBUG nova.virt.libvirt.driver [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] No VIF found with MAC fa:16:3e:7d:0a:7b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 17:21:50 compute-0 nova_compute[185389]: 2026-01-26 17:21:50.323 185393 INFO nova.virt.libvirt.driver [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] [instance: 8c28c24a-cab4-43b3-b9ee-4ce40d092c71] Using config drive
Jan 26 17:21:50 compute-0 systemd[1]: Starting libvirt proxy daemon...
Jan 26 17:21:50 compute-0 systemd[1]: Started libvirt proxy daemon.
Jan 26 17:21:50 compute-0 nova_compute[185389]: 2026-01-26 17:21:50.836 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:50 compute-0 nova_compute[185389]: 2026-01-26 17:21:50.940 185393 DEBUG nova.compute.manager [req-db77d10c-98f3-44cf-b6fc-9938a785f63e req-87628199-f588-4029-b925-8327056194f1 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: cecfd5ba-76f1-47f6-8845-36e6c7ed9773] Received event network-changed-9121ca16-ef95-465a-8d54-65a4d9b6659a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 17:21:50 compute-0 nova_compute[185389]: 2026-01-26 17:21:50.941 185393 DEBUG nova.compute.manager [req-db77d10c-98f3-44cf-b6fc-9938a785f63e req-87628199-f588-4029-b925-8327056194f1 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: cecfd5ba-76f1-47f6-8845-36e6c7ed9773] Refreshing instance network info cache due to event network-changed-9121ca16-ef95-465a-8d54-65a4d9b6659a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 17:21:50 compute-0 nova_compute[185389]: 2026-01-26 17:21:50.941 185393 DEBUG oslo_concurrency.lockutils [req-db77d10c-98f3-44cf-b6fc-9938a785f63e req-87628199-f588-4029-b925-8327056194f1 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquiring lock "refresh_cache-cecfd5ba-76f1-47f6-8845-36e6c7ed9773" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 17:21:50 compute-0 nova_compute[185389]: 2026-01-26 17:21:50.941 185393 DEBUG oslo_concurrency.lockutils [req-db77d10c-98f3-44cf-b6fc-9938a785f63e req-87628199-f588-4029-b925-8327056194f1 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquired lock "refresh_cache-cecfd5ba-76f1-47f6-8845-36e6c7ed9773" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 17:21:50 compute-0 nova_compute[185389]: 2026-01-26 17:21:50.942 185393 DEBUG nova.network.neutron [req-db77d10c-98f3-44cf-b6fc-9938a785f63e req-87628199-f588-4029-b925-8327056194f1 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: cecfd5ba-76f1-47f6-8845-36e6c7ed9773] Refreshing network info cache for port 9121ca16-ef95-465a-8d54-65a4d9b6659a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 17:21:51 compute-0 nova_compute[185389]: 2026-01-26 17:21:51.017 185393 INFO nova.virt.libvirt.driver [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] [instance: cecfd5ba-76f1-47f6-8845-36e6c7ed9773] Creating config drive at /var/lib/nova/instances/cecfd5ba-76f1-47f6-8845-36e6c7ed9773/disk.config
Jan 26 17:21:51 compute-0 nova_compute[185389]: 2026-01-26 17:21:51.024 185393 DEBUG oslo_concurrency.processutils [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cecfd5ba-76f1-47f6-8845-36e6c7ed9773/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpq9_tgxra execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:21:51 compute-0 nova_compute[185389]: 2026-01-26 17:21:51.057 185393 INFO nova.virt.libvirt.driver [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] [instance: 8c28c24a-cab4-43b3-b9ee-4ce40d092c71] Creating config drive at /var/lib/nova/instances/8c28c24a-cab4-43b3-b9ee-4ce40d092c71/disk.config
Jan 26 17:21:51 compute-0 nova_compute[185389]: 2026-01-26 17:21:51.071 185393 DEBUG oslo_concurrency.processutils [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8c28c24a-cab4-43b3-b9ee-4ce40d092c71/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5uxqwx2b execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:21:51 compute-0 nova_compute[185389]: 2026-01-26 17:21:51.163 185393 DEBUG oslo_concurrency.processutils [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cecfd5ba-76f1-47f6-8845-36e6c7ed9773/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpq9_tgxra" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:21:51 compute-0 nova_compute[185389]: 2026-01-26 17:21:51.198 185393 DEBUG nova.compute.manager [req-488b7c56-c84b-4600-97fd-a814dd920456 req-05e1f2e7-4f28-4fe9-b696-56f3670c3235 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Received event network-vif-plugged-6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 17:21:51 compute-0 nova_compute[185389]: 2026-01-26 17:21:51.198 185393 DEBUG oslo_concurrency.lockutils [req-488b7c56-c84b-4600-97fd-a814dd920456 req-05e1f2e7-4f28-4fe9-b696-56f3670c3235 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquiring lock "186e87cb-beb9-48df-8b10-dfc5c8afe996-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:21:51 compute-0 nova_compute[185389]: 2026-01-26 17:21:51.198 185393 DEBUG oslo_concurrency.lockutils [req-488b7c56-c84b-4600-97fd-a814dd920456 req-05e1f2e7-4f28-4fe9-b696-56f3670c3235 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "186e87cb-beb9-48df-8b10-dfc5c8afe996-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:21:51 compute-0 nova_compute[185389]: 2026-01-26 17:21:51.199 185393 DEBUG oslo_concurrency.lockutils [req-488b7c56-c84b-4600-97fd-a814dd920456 req-05e1f2e7-4f28-4fe9-b696-56f3670c3235 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "186e87cb-beb9-48df-8b10-dfc5c8afe996-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:21:51 compute-0 nova_compute[185389]: 2026-01-26 17:21:51.199 185393 DEBUG nova.compute.manager [req-488b7c56-c84b-4600-97fd-a814dd920456 req-05e1f2e7-4f28-4fe9-b696-56f3670c3235 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Processing event network-vif-plugged-6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 17:21:51 compute-0 nova_compute[185389]: 2026-01-26 17:21:51.199 185393 DEBUG nova.compute.manager [req-488b7c56-c84b-4600-97fd-a814dd920456 req-05e1f2e7-4f28-4fe9-b696-56f3670c3235 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Received event network-vif-plugged-6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 17:21:51 compute-0 nova_compute[185389]: 2026-01-26 17:21:51.199 185393 DEBUG oslo_concurrency.lockutils [req-488b7c56-c84b-4600-97fd-a814dd920456 req-05e1f2e7-4f28-4fe9-b696-56f3670c3235 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquiring lock "186e87cb-beb9-48df-8b10-dfc5c8afe996-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:21:51 compute-0 nova_compute[185389]: 2026-01-26 17:21:51.200 185393 DEBUG oslo_concurrency.lockutils [req-488b7c56-c84b-4600-97fd-a814dd920456 req-05e1f2e7-4f28-4fe9-b696-56f3670c3235 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "186e87cb-beb9-48df-8b10-dfc5c8afe996-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:21:51 compute-0 nova_compute[185389]: 2026-01-26 17:21:51.200 185393 DEBUG oslo_concurrency.lockutils [req-488b7c56-c84b-4600-97fd-a814dd920456 req-05e1f2e7-4f28-4fe9-b696-56f3670c3235 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "186e87cb-beb9-48df-8b10-dfc5c8afe996-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:21:51 compute-0 nova_compute[185389]: 2026-01-26 17:21:51.200 185393 DEBUG nova.compute.manager [req-488b7c56-c84b-4600-97fd-a814dd920456 req-05e1f2e7-4f28-4fe9-b696-56f3670c3235 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] No waiting events found dispatching network-vif-plugged-6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 17:21:51 compute-0 nova_compute[185389]: 2026-01-26 17:21:51.200 185393 WARNING nova.compute.manager [req-488b7c56-c84b-4600-97fd-a814dd920456 req-05e1f2e7-4f28-4fe9-b696-56f3670c3235 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Received unexpected event network-vif-plugged-6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3 for instance with vm_state building and task_state spawning.
Jan 26 17:21:51 compute-0 nova_compute[185389]: 2026-01-26 17:21:51.201 185393 DEBUG nova.compute.manager [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 17:21:51 compute-0 nova_compute[185389]: 2026-01-26 17:21:51.202 185393 DEBUG oslo_concurrency.processutils [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8c28c24a-cab4-43b3-b9ee-4ce40d092c71/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5uxqwx2b" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:21:51 compute-0 nova_compute[185389]: 2026-01-26 17:21:51.218 185393 DEBUG nova.virt.driver [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Emitting event <LifecycleEvent: 1769448111.2174826, 186e87cb-beb9-48df-8b10-dfc5c8afe996 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 17:21:51 compute-0 nova_compute[185389]: 2026-01-26 17:21:51.219 185393 INFO nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] VM Resumed (Lifecycle Event)
Jan 26 17:21:51 compute-0 nova_compute[185389]: 2026-01-26 17:21:51.223 185393 DEBUG nova.virt.libvirt.driver [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 17:21:51 compute-0 NetworkManager[56253]: <info>  [1769448111.2251] manager: (tap9121ca16-ef): new Tun device (/org/freedesktop/NetworkManager/Devices/41)
Jan 26 17:21:51 compute-0 systemd-udevd[256179]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 17:21:51 compute-0 kernel: tap9121ca16-ef: entered promiscuous mode
Jan 26 17:21:51 compute-0 nova_compute[185389]: 2026-01-26 17:21:51.231 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:51 compute-0 ovn_controller[97699]: 2026-01-26T17:21:51Z|00070|binding|INFO|Claiming lport 9121ca16-ef95-465a-8d54-65a4d9b6659a for this chassis.
Jan 26 17:21:51 compute-0 ovn_controller[97699]: 2026-01-26T17:21:51Z|00071|binding|INFO|9121ca16-ef95-465a-8d54-65a4d9b6659a: Claiming fa:16:3e:98:4e:d9 10.100.0.10
Jan 26 17:21:51 compute-0 nova_compute[185389]: 2026-01-26 17:21:51.240 185393 INFO nova.virt.libvirt.driver [-] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Instance spawned successfully.
Jan 26 17:21:51 compute-0 nova_compute[185389]: 2026-01-26 17:21:51.241 185393 DEBUG nova.virt.libvirt.driver [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 17:21:51 compute-0 NetworkManager[56253]: <info>  [1769448111.2466] device (tap9121ca16-ef): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 17:21:51 compute-0 NetworkManager[56253]: <info>  [1769448111.2480] device (tap9121ca16-ef): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 17:21:51 compute-0 nova_compute[185389]: 2026-01-26 17:21:51.255 185393 DEBUG nova.network.neutron [req-2c2d2b89-902d-4973-ad6c-cb9d20d855ff req-54112d85-6aca-4a9c-935a-fbf03401f980 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Updated VIF entry in instance network info cache for port 6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 17:21:51 compute-0 ovn_controller[97699]: 2026-01-26T17:21:51Z|00072|binding|INFO|Setting lport 9121ca16-ef95-465a-8d54-65a4d9b6659a ovn-installed in OVS
Jan 26 17:21:51 compute-0 nova_compute[185389]: 2026-01-26 17:21:51.256 185393 DEBUG nova.network.neutron [req-2c2d2b89-902d-4973-ad6c-cb9d20d855ff req-54112d85-6aca-4a9c-935a-fbf03401f980 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Updating instance_info_cache with network_info: [{"id": "6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3", "address": "fa:16:3e:b3:ea:64", "network": {"id": "4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1598418847-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b9ff6ad3012499db2eb0a82a1ccbcaa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e11a3e1-dc", "ovs_interfaceid": "6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 17:21:51 compute-0 nova_compute[185389]: 2026-01-26 17:21:51.258 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:51 compute-0 systemd-machined[156679]: New machine qemu-8-instance-00000008.
Jan 26 17:21:51 compute-0 kernel: tap86c33312-69: entered promiscuous mode
Jan 26 17:21:51 compute-0 NetworkManager[56253]: <info>  [1769448111.2913] manager: (tap86c33312-69): new Tun device (/org/freedesktop/NetworkManager/Devices/42)
Jan 26 17:21:51 compute-0 nova_compute[185389]: 2026-01-26 17:21:51.299 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:51 compute-0 systemd[1]: Started Virtual Machine qemu-8-instance-00000008.
Jan 26 17:21:51 compute-0 ovn_controller[97699]: 2026-01-26T17:21:51Z|00073|binding|INFO|Setting lport 9121ca16-ef95-465a-8d54-65a4d9b6659a up in Southbound
Jan 26 17:21:51 compute-0 ovn_controller[97699]: 2026-01-26T17:21:51Z|00074|if_status|INFO|Not updating pb chassis for 86c33312-6904-4dd4-9a95-7fd318980439 now as sb is readonly
Jan 26 17:21:51 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:51.296 106955 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:98:4e:d9 10.100.0.10'], port_security=['fa:16:3e:98:4e:d9 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'cecfd5ba-76f1-47f6-8845-36e6c7ed9773', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-11ede1e9-a5f0-4f1a-82c2-9705645b0db8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '18301e8b436a4fa7ba388e173f305ba9', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2e989ead-79f9-412e-82f2-4db0d9019b04', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a7180414-3027-43db-8f29-4631defad8ff, chassis=[<ovs.db.idl.Row object at 0x7faee18cfdf0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7faee18cfdf0>], logical_port=9121ca16-ef95-465a-8d54-65a4d9b6659a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 17:21:51 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:51.298 106955 INFO neutron.agent.ovn.metadata.agent [-] Port 9121ca16-ef95-465a-8d54-65a4d9b6659a in datapath 11ede1e9-a5f0-4f1a-82c2-9705645b0db8 bound to our chassis
Jan 26 17:21:51 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:51.302 106955 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 11ede1e9-a5f0-4f1a-82c2-9705645b0db8
Jan 26 17:21:51 compute-0 NetworkManager[56253]: <info>  [1769448111.3131] device (tap86c33312-69): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 17:21:51 compute-0 NetworkManager[56253]: <info>  [1769448111.3139] device (tap86c33312-69): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 17:21:51 compute-0 nova_compute[185389]: 2026-01-26 17:21:51.319 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:51 compute-0 nova_compute[185389]: 2026-01-26 17:21:51.327 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:51 compute-0 ovn_controller[97699]: 2026-01-26T17:21:51Z|00075|binding|INFO|Claiming lport 86c33312-6904-4dd4-9a95-7fd318980439 for this chassis.
Jan 26 17:21:51 compute-0 ovn_controller[97699]: 2026-01-26T17:21:51Z|00076|binding|INFO|86c33312-6904-4dd4-9a95-7fd318980439: Claiming fa:16:3e:7d:0a:7b 10.100.0.12
Jan 26 17:21:51 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:51.319 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[6c9b0e8e-664c-490b-8865-791e88441a16]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:21:51 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:51.321 106955 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap11ede1e9-a1 in ovnmeta-11ede1e9-a5f0-4f1a-82c2-9705645b0db8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 17:21:51 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:51.323 238734 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap11ede1e9-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 17:21:51 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:51.324 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[90056d74-39e1-4c29-bfd0-93c8bc054f94]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:21:51 compute-0 ovn_controller[97699]: 2026-01-26T17:21:51Z|00077|binding|INFO|Setting lport 86c33312-6904-4dd4-9a95-7fd318980439 ovn-installed in OVS
Jan 26 17:21:51 compute-0 nova_compute[185389]: 2026-01-26 17:21:51.331 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:51 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:51.332 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[f18669a1-ec12-4f29-a382-8bbfa0823e9f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:21:51 compute-0 nova_compute[185389]: 2026-01-26 17:21:51.334 185393 DEBUG nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 17:21:51 compute-0 ovn_controller[97699]: 2026-01-26T17:21:51Z|00078|binding|INFO|Setting lport 86c33312-6904-4dd4-9a95-7fd318980439 up in Southbound
Jan 26 17:21:51 compute-0 nova_compute[185389]: 2026-01-26 17:21:51.336 185393 DEBUG oslo_concurrency.lockutils [req-2c2d2b89-902d-4973-ad6c-cb9d20d855ff req-54112d85-6aca-4a9c-935a-fbf03401f980 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Releasing lock "refresh_cache-186e87cb-beb9-48df-8b10-dfc5c8afe996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 17:21:51 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:51.336 106955 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7d:0a:7b 10.100.0.12'], port_security=['fa:16:3e:7d:0a:7b 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '8c28c24a-cab4-43b3-b9ee-4ce40d092c71', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-92052205-69bb-42de-8996-b5b0b55d3221', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '854cc1d25bbe4358a1a0687611af792e', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'adc62df8-ace5-4031-9f17-e384cbe29eb5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b3aab8cb-6a21-458a-ad7d-d889e3560e0b, chassis=[<ovs.db.idl.Row object at 0x7faee18cfdf0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7faee18cfdf0>], logical_port=86c33312-6904-4dd4-9a95-7fd318980439) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 17:21:51 compute-0 systemd-machined[156679]: New machine qemu-9-instance-00000009.
Jan 26 17:21:51 compute-0 systemd[1]: Started Virtual Machine qemu-9-instance-00000009.
Jan 26 17:21:51 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:51.345 107449 DEBUG oslo.privsep.daemon [-] privsep: reply[75aa44cf-eb70-4ac0-b188-a18680bb86e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:21:51 compute-0 nova_compute[185389]: 2026-01-26 17:21:51.361 185393 DEBUG nova.virt.libvirt.driver [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 17:21:51 compute-0 nova_compute[185389]: 2026-01-26 17:21:51.362 185393 DEBUG nova.virt.libvirt.driver [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 17:21:51 compute-0 nova_compute[185389]: 2026-01-26 17:21:51.363 185393 DEBUG nova.virt.libvirt.driver [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 17:21:51 compute-0 nova_compute[185389]: 2026-01-26 17:21:51.364 185393 DEBUG nova.virt.libvirt.driver [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 17:21:51 compute-0 nova_compute[185389]: 2026-01-26 17:21:51.364 185393 DEBUG nova.virt.libvirt.driver [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 17:21:51 compute-0 nova_compute[185389]: 2026-01-26 17:21:51.365 185393 DEBUG nova.virt.libvirt.driver [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 17:21:51 compute-0 nova_compute[185389]: 2026-01-26 17:21:51.371 185393 DEBUG nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 17:21:51 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:51.370 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[7d0da342-18c6-45e1-83b6-de7a1cfe946c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:21:51 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:51.412 238787 DEBUG oslo.privsep.daemon [-] privsep: reply[e1367914-1bcb-4ff2-aa39-6d04595e3e04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:21:51 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:51.418 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[2a60dced-476d-402f-8a62-df7a2b08e855]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:21:51 compute-0 NetworkManager[56253]: <info>  [1769448111.4215] manager: (tap11ede1e9-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/43)
Jan 26 17:21:51 compute-0 nova_compute[185389]: 2026-01-26 17:21:51.425 185393 INFO nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 17:21:51 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:51.450 238787 DEBUG oslo.privsep.daemon [-] privsep: reply[0f9d34f1-73d8-4276-a4a4-1461fca514d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:21:51 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:51.455 238787 DEBUG oslo.privsep.daemon [-] privsep: reply[ad764aed-9029-47cc-83e5-bd35e0499066]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:21:51 compute-0 NetworkManager[56253]: <info>  [1769448111.4899] device (tap11ede1e9-a0): carrier: link connected
Jan 26 17:21:51 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:51.495 238787 DEBUG oslo.privsep.daemon [-] privsep: reply[e61b6f26-eea7-4e87-8194-65ca6fd13007]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:21:51 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:51.516 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[0aa3e547-f693-418e-9b2f-06fee7270ccf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap11ede1e9-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:98:07:67'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 26], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675998, 'reachable_time': 37334, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 256341, 'error': None, 'target': 'ovnmeta-11ede1e9-a5f0-4f1a-82c2-9705645b0db8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:21:51 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:51.535 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[0afe7b86-af43-466b-9be5-9c444618769e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe98:767'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 675998, 'tstamp': 675998}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 256342, 'error': None, 'target': 'ovnmeta-11ede1e9-a5f0-4f1a-82c2-9705645b0db8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:21:51 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:51.552 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[a81b1e30-db21-4751-bd20-699441fe3a4f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap11ede1e9-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:98:07:67'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 26], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675998, 'reachable_time': 37334, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 256343, 'error': None, 'target': 'ovnmeta-11ede1e9-a5f0-4f1a-82c2-9705645b0db8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:21:51 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:51.589 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[93a507be-721d-44f6-940e-f61dab0cc633]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:21:51 compute-0 nova_compute[185389]: 2026-01-26 17:21:51.646 185393 INFO nova.compute.manager [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Took 17.12 seconds to spawn the instance on the hypervisor.
Jan 26 17:21:51 compute-0 nova_compute[185389]: 2026-01-26 17:21:51.647 185393 DEBUG nova.compute.manager [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 17:21:51 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:51.650 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[0137a15b-70ad-4768-b762-4000d3afcbdb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:21:51 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:51.660 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap11ede1e9-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:21:51 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:51.666 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 17:21:51 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:51.667 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap11ede1e9-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:21:51 compute-0 NetworkManager[56253]: <info>  [1769448111.6715] manager: (tap11ede1e9-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/44)
Jan 26 17:21:51 compute-0 kernel: tap11ede1e9-a0: entered promiscuous mode
Jan 26 17:21:51 compute-0 nova_compute[185389]: 2026-01-26 17:21:51.678 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:51 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:51.682 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap11ede1e9-a0, col_values=(('external_ids', {'iface-id': 'cc3400a9-fad2-42f1-bf99-972bf42762ba'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:21:51 compute-0 ovn_controller[97699]: 2026-01-26T17:21:51Z|00079|binding|INFO|Releasing lport cc3400a9-fad2-42f1-bf99-972bf42762ba from this chassis (sb_readonly=0)
Jan 26 17:21:51 compute-0 nova_compute[185389]: 2026-01-26 17:21:51.685 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:51 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:51.687 106955 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/11ede1e9-a5f0-4f1a-82c2-9705645b0db8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/11ede1e9-a5f0-4f1a-82c2-9705645b0db8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 17:21:51 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:51.688 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[e6580c00-df29-441d-98bb-0ab3fe3f9520]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:21:51 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:51.689 106955 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 17:21:51 compute-0 ovn_metadata_agent[106950]: global
Jan 26 17:21:51 compute-0 ovn_metadata_agent[106950]:     log         /dev/log local0 debug
Jan 26 17:21:51 compute-0 ovn_metadata_agent[106950]:     log-tag     haproxy-metadata-proxy-11ede1e9-a5f0-4f1a-82c2-9705645b0db8
Jan 26 17:21:51 compute-0 ovn_metadata_agent[106950]:     user        root
Jan 26 17:21:51 compute-0 ovn_metadata_agent[106950]:     group       root
Jan 26 17:21:51 compute-0 ovn_metadata_agent[106950]:     maxconn     1024
Jan 26 17:21:51 compute-0 ovn_metadata_agent[106950]:     pidfile     /var/lib/neutron/external/pids/11ede1e9-a5f0-4f1a-82c2-9705645b0db8.pid.haproxy
Jan 26 17:21:51 compute-0 ovn_metadata_agent[106950]:     daemon
Jan 26 17:21:51 compute-0 ovn_metadata_agent[106950]: 
Jan 26 17:21:51 compute-0 ovn_metadata_agent[106950]: defaults
Jan 26 17:21:51 compute-0 ovn_metadata_agent[106950]:     log global
Jan 26 17:21:51 compute-0 ovn_metadata_agent[106950]:     mode http
Jan 26 17:21:51 compute-0 ovn_metadata_agent[106950]:     option httplog
Jan 26 17:21:51 compute-0 ovn_metadata_agent[106950]:     option dontlognull
Jan 26 17:21:51 compute-0 ovn_metadata_agent[106950]:     option http-server-close
Jan 26 17:21:51 compute-0 ovn_metadata_agent[106950]:     option forwardfor
Jan 26 17:21:51 compute-0 ovn_metadata_agent[106950]:     retries                 3
Jan 26 17:21:51 compute-0 ovn_metadata_agent[106950]:     timeout http-request    30s
Jan 26 17:21:51 compute-0 ovn_metadata_agent[106950]:     timeout connect         30s
Jan 26 17:21:51 compute-0 ovn_metadata_agent[106950]:     timeout client          32s
Jan 26 17:21:51 compute-0 ovn_metadata_agent[106950]:     timeout server          32s
Jan 26 17:21:51 compute-0 ovn_metadata_agent[106950]:     timeout http-keep-alive 30s
Jan 26 17:21:51 compute-0 ovn_metadata_agent[106950]: 
Jan 26 17:21:51 compute-0 ovn_metadata_agent[106950]: 
Jan 26 17:21:51 compute-0 ovn_metadata_agent[106950]: listen listener
Jan 26 17:21:51 compute-0 ovn_metadata_agent[106950]:     bind 169.254.169.254:80
Jan 26 17:21:51 compute-0 ovn_metadata_agent[106950]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 17:21:51 compute-0 ovn_metadata_agent[106950]:     http-request add-header X-OVN-Network-ID 11ede1e9-a5f0-4f1a-82c2-9705645b0db8
Jan 26 17:21:51 compute-0 ovn_metadata_agent[106950]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 17:21:51 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:51.690 106955 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-11ede1e9-a5f0-4f1a-82c2-9705645b0db8', 'env', 'PROCESS_TAG=haproxy-11ede1e9-a5f0-4f1a-82c2-9705645b0db8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/11ede1e9-a5f0-4f1a-82c2-9705645b0db8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 17:21:51 compute-0 nova_compute[185389]: 2026-01-26 17:21:51.711 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:51 compute-0 nova_compute[185389]: 2026-01-26 17:21:51.930 185393 DEBUG nova.virt.driver [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Emitting event <LifecycleEvent: 1769448111.9304802, 8c28c24a-cab4-43b3-b9ee-4ce40d092c71 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 17:21:51 compute-0 nova_compute[185389]: 2026-01-26 17:21:51.931 185393 INFO nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: 8c28c24a-cab4-43b3-b9ee-4ce40d092c71] VM Started (Lifecycle Event)
Jan 26 17:21:52 compute-0 nova_compute[185389]: 2026-01-26 17:21:52.088 185393 INFO nova.compute.manager [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Took 18.04 seconds to build instance.
Jan 26 17:21:52 compute-0 podman[256381]: 2026-01-26 17:21:52.19363178 +0000 UTC m=+0.088260186 container create 261ddc93e38e239a3727554ea5b0784938a4fa5fa9a4ed63ba47334ef70ff5e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-11ede1e9-a5f0-4f1a-82c2-9705645b0db8, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 26 17:21:52 compute-0 podman[256381]: 2026-01-26 17:21:52.152998698 +0000 UTC m=+0.047627124 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 17:21:52 compute-0 systemd[1]: Started libpod-conmon-261ddc93e38e239a3727554ea5b0784938a4fa5fa9a4ed63ba47334ef70ff5e4.scope.
Jan 26 17:21:52 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:21:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/375377a2b5b448accdea7fe4c5d3e2ad60d231cd84af3ff1d661b05e7f52f37b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 17:21:52 compute-0 nova_compute[185389]: 2026-01-26 17:21:52.328 185393 DEBUG nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: 8c28c24a-cab4-43b3-b9ee-4ce40d092c71] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 17:21:52 compute-0 podman[256381]: 2026-01-26 17:21:52.336849208 +0000 UTC m=+0.231477644 container init 261ddc93e38e239a3727554ea5b0784938a4fa5fa9a4ed63ba47334ef70ff5e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-11ede1e9-a5f0-4f1a-82c2-9705645b0db8, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 26 17:21:52 compute-0 nova_compute[185389]: 2026-01-26 17:21:52.337 185393 DEBUG nova.virt.driver [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Emitting event <LifecycleEvent: 1769448111.9314976, 8c28c24a-cab4-43b3-b9ee-4ce40d092c71 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 17:21:52 compute-0 nova_compute[185389]: 2026-01-26 17:21:52.337 185393 INFO nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: 8c28c24a-cab4-43b3-b9ee-4ce40d092c71] VM Paused (Lifecycle Event)
Jan 26 17:21:52 compute-0 podman[256381]: 2026-01-26 17:21:52.346381197 +0000 UTC m=+0.241009603 container start 261ddc93e38e239a3727554ea5b0784938a4fa5fa9a4ed63ba47334ef70ff5e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-11ede1e9-a5f0-4f1a-82c2-9705645b0db8, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 26 17:21:52 compute-0 neutron-haproxy-ovnmeta-11ede1e9-a5f0-4f1a-82c2-9705645b0db8[256397]: [NOTICE]   (256401) : New worker (256403) forked
Jan 26 17:21:52 compute-0 neutron-haproxy-ovnmeta-11ede1e9-a5f0-4f1a-82c2-9705645b0db8[256397]: [NOTICE]   (256401) : Loading success.
Jan 26 17:21:52 compute-0 nova_compute[185389]: 2026-01-26 17:21:52.432 185393 DEBUG nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: 8c28c24a-cab4-43b3-b9ee-4ce40d092c71] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 17:21:52 compute-0 nova_compute[185389]: 2026-01-26 17:21:52.439 185393 DEBUG nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: 8c28c24a-cab4-43b3-b9ee-4ce40d092c71] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 17:21:52 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:52.486 106955 INFO neutron.agent.ovn.metadata.agent [-] Port 86c33312-6904-4dd4-9a95-7fd318980439 in datapath 92052205-69bb-42de-8996-b5b0b55d3221 unbound from our chassis
Jan 26 17:21:52 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:52.489 106955 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 92052205-69bb-42de-8996-b5b0b55d3221
Jan 26 17:21:52 compute-0 nova_compute[185389]: 2026-01-26 17:21:52.500 185393 DEBUG oslo_concurrency.lockutils [None req-5d10990b-b37e-4986-b240-9279dfdeae7b 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Lock "186e87cb-beb9-48df-8b10-dfc5c8afe996" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.817s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:21:52 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:52.501 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[1e9d5f8d-4626-4bb7-83f2-1ec1c44fde2c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:21:52 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:52.502 106955 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap92052205-61 in ovnmeta-92052205-69bb-42de-8996-b5b0b55d3221 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 17:21:52 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:52.504 238734 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap92052205-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 17:21:52 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:52.504 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[7364aa90-6ed5-4bf5-8579-deab43969ef8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:21:52 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:52.506 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[934b5b61-e315-4a9d-bef1-a92d9d718f5b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:21:52 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:52.519 107449 DEBUG oslo.privsep.daemon [-] privsep: reply[03f1c1b2-6aff-48d4-bb71-e9a16da8f51e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:21:52 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:52.545 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[cb69e050-296a-4864-a0de-832683039f9e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:21:52 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:52.576 238787 DEBUG oslo.privsep.daemon [-] privsep: reply[738c592a-79c4-484e-8d30-d2e01f983990]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:21:52 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:52.593 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[63f3af37-345c-40b5-ad94-f0f5d2a4f075]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:21:52 compute-0 NetworkManager[56253]: <info>  [1769448112.5974] manager: (tap92052205-60): new Veth device (/org/freedesktop/NetworkManager/Devices/45)
Jan 26 17:21:52 compute-0 systemd-udevd[256359]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 17:21:52 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:52.628 238787 DEBUG oslo.privsep.daemon [-] privsep: reply[b845d326-ff86-4ada-b9bf-9655997046c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:21:52 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:52.632 238787 DEBUG oslo.privsep.daemon [-] privsep: reply[d07f2c52-183f-4cf6-8b94-77cac1d9f155]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:21:52 compute-0 nova_compute[185389]: 2026-01-26 17:21:52.657 185393 INFO nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: 8c28c24a-cab4-43b3-b9ee-4ce40d092c71] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 17:21:52 compute-0 NetworkManager[56253]: <info>  [1769448112.6638] device (tap92052205-60): carrier: link connected
Jan 26 17:21:52 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:52.670 238787 DEBUG oslo.privsep.daemon [-] privsep: reply[99e2776d-6523-4da7-a610-b2f068ec3d40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:21:52 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:52.696 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[5b5580e3-5236-468c-8a5b-92bbd36b152c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap92052205-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5e:09:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 676116, 'reachable_time': 28969, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 256445, 'error': None, 'target': 'ovnmeta-92052205-69bb-42de-8996-b5b0b55d3221', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:21:52 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:52.717 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[cd6e0f93-a23e-4ebd-93f0-338b981235b2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5e:918'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 676116, 'tstamp': 676116}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 256447, 'error': None, 'target': 'ovnmeta-92052205-69bb-42de-8996-b5b0b55d3221', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:21:52 compute-0 nova_compute[185389]: 2026-01-26 17:21:52.726 185393 DEBUG nova.virt.driver [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Emitting event <LifecycleEvent: 1769448112.7244344, cecfd5ba-76f1-47f6-8845-36e6c7ed9773 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 17:21:52 compute-0 nova_compute[185389]: 2026-01-26 17:21:52.726 185393 INFO nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: cecfd5ba-76f1-47f6-8845-36e6c7ed9773] VM Started (Lifecycle Event)
Jan 26 17:21:52 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:52.736 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[b31ac970-f8e5-4ecb-b4e2-e7f644a27ce5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap92052205-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5e:09:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 676116, 'reachable_time': 28969, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 256448, 'error': None, 'target': 'ovnmeta-92052205-69bb-42de-8996-b5b0b55d3221', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:21:52 compute-0 nova_compute[185389]: 2026-01-26 17:21:52.748 185393 DEBUG nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: cecfd5ba-76f1-47f6-8845-36e6c7ed9773] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 17:21:52 compute-0 nova_compute[185389]: 2026-01-26 17:21:52.760 185393 DEBUG nova.virt.driver [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Emitting event <LifecycleEvent: 1769448112.7244976, cecfd5ba-76f1-47f6-8845-36e6c7ed9773 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 17:21:52 compute-0 nova_compute[185389]: 2026-01-26 17:21:52.760 185393 INFO nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: cecfd5ba-76f1-47f6-8845-36e6c7ed9773] VM Paused (Lifecycle Event)
Jan 26 17:21:52 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:52.777 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[46cb8018-c7bc-42e7-91ad-529bf2adbba2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:21:52 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:52.839 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[3e37e7ed-c713-42b4-9974-940ec748efdf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:21:52 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:52.840 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap92052205-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:21:52 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:52.840 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 17:21:52 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:52.841 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap92052205-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:21:52 compute-0 nova_compute[185389]: 2026-01-26 17:21:52.843 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:52 compute-0 kernel: tap92052205-60: entered promiscuous mode
Jan 26 17:21:52 compute-0 NetworkManager[56253]: <info>  [1769448112.8443] manager: (tap92052205-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/46)
Jan 26 17:21:52 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:52.848 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap92052205-60, col_values=(('external_ids', {'iface-id': 'ec436a0a-dbef-4a50-8041-14aa7a52d155'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:21:52 compute-0 nova_compute[185389]: 2026-01-26 17:21:52.848 185393 DEBUG nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: cecfd5ba-76f1-47f6-8845-36e6c7ed9773] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 17:21:52 compute-0 nova_compute[185389]: 2026-01-26 17:21:52.849 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:52 compute-0 ovn_controller[97699]: 2026-01-26T17:21:52Z|00080|binding|INFO|Releasing lport ec436a0a-dbef-4a50-8041-14aa7a52d155 from this chassis (sb_readonly=0)
Jan 26 17:21:52 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:52.851 106955 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/92052205-69bb-42de-8996-b5b0b55d3221.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/92052205-69bb-42de-8996-b5b0b55d3221.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 17:21:52 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:52.856 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[140da5b0-3d57-49bb-89b7-fe53fc9aede3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:21:52 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:52.857 106955 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 17:21:52 compute-0 ovn_metadata_agent[106950]: global
Jan 26 17:21:52 compute-0 ovn_metadata_agent[106950]:     log         /dev/log local0 debug
Jan 26 17:21:52 compute-0 ovn_metadata_agent[106950]:     log-tag     haproxy-metadata-proxy-92052205-69bb-42de-8996-b5b0b55d3221
Jan 26 17:21:52 compute-0 ovn_metadata_agent[106950]:     user        root
Jan 26 17:21:52 compute-0 ovn_metadata_agent[106950]:     group       root
Jan 26 17:21:52 compute-0 ovn_metadata_agent[106950]:     maxconn     1024
Jan 26 17:21:52 compute-0 ovn_metadata_agent[106950]:     pidfile     /var/lib/neutron/external/pids/92052205-69bb-42de-8996-b5b0b55d3221.pid.haproxy
Jan 26 17:21:52 compute-0 ovn_metadata_agent[106950]:     daemon
Jan 26 17:21:52 compute-0 ovn_metadata_agent[106950]: 
Jan 26 17:21:52 compute-0 ovn_metadata_agent[106950]: defaults
Jan 26 17:21:52 compute-0 ovn_metadata_agent[106950]:     log global
Jan 26 17:21:52 compute-0 ovn_metadata_agent[106950]:     mode http
Jan 26 17:21:52 compute-0 ovn_metadata_agent[106950]:     option httplog
Jan 26 17:21:52 compute-0 ovn_metadata_agent[106950]:     option dontlognull
Jan 26 17:21:52 compute-0 ovn_metadata_agent[106950]:     option http-server-close
Jan 26 17:21:52 compute-0 ovn_metadata_agent[106950]:     option forwardfor
Jan 26 17:21:52 compute-0 ovn_metadata_agent[106950]:     retries                 3
Jan 26 17:21:52 compute-0 ovn_metadata_agent[106950]:     timeout http-request    30s
Jan 26 17:21:52 compute-0 ovn_metadata_agent[106950]:     timeout connect         30s
Jan 26 17:21:52 compute-0 ovn_metadata_agent[106950]:     timeout client          32s
Jan 26 17:21:52 compute-0 ovn_metadata_agent[106950]:     timeout server          32s
Jan 26 17:21:52 compute-0 ovn_metadata_agent[106950]:     timeout http-keep-alive 30s
Jan 26 17:21:52 compute-0 ovn_metadata_agent[106950]: 
Jan 26 17:21:52 compute-0 ovn_metadata_agent[106950]: 
Jan 26 17:21:52 compute-0 ovn_metadata_agent[106950]: listen listener
Jan 26 17:21:52 compute-0 ovn_metadata_agent[106950]:     bind 169.254.169.254:80
Jan 26 17:21:52 compute-0 ovn_metadata_agent[106950]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 17:21:52 compute-0 ovn_metadata_agent[106950]:     http-request add-header X-OVN-Network-ID 92052205-69bb-42de-8996-b5b0b55d3221
Jan 26 17:21:52 compute-0 ovn_metadata_agent[106950]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 17:21:52 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:21:52.858 106955 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-92052205-69bb-42de-8996-b5b0b55d3221', 'env', 'PROCESS_TAG=haproxy-92052205-69bb-42de-8996-b5b0b55d3221', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/92052205-69bb-42de-8996-b5b0b55d3221.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 17:21:52 compute-0 nova_compute[185389]: 2026-01-26 17:21:52.861 185393 DEBUG nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: cecfd5ba-76f1-47f6-8845-36e6c7ed9773] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 17:21:52 compute-0 nova_compute[185389]: 2026-01-26 17:21:52.870 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:52 compute-0 nova_compute[185389]: 2026-01-26 17:21:52.893 185393 INFO nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: cecfd5ba-76f1-47f6-8845-36e6c7ed9773] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 17:21:53 compute-0 podman[256480]: 2026-01-26 17:21:53.285467439 +0000 UTC m=+0.063590487 container create 8e5f659aaac5a617315849b78129b4e17db44a2fe6ccec2a37fc6ab9e8944fc8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-92052205-69bb-42de-8996-b5b0b55d3221, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Jan 26 17:21:53 compute-0 systemd[1]: Started libpod-conmon-8e5f659aaac5a617315849b78129b4e17db44a2fe6ccec2a37fc6ab9e8944fc8.scope.
Jan 26 17:21:53 compute-0 podman[256480]: 2026-01-26 17:21:53.256224075 +0000 UTC m=+0.034347153 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 17:21:53 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:21:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6357b9af614135871e09f5e67e46dbdfa598f60a9ee0e4d9100e7dbb9be49d0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 17:21:53 compute-0 podman[256480]: 2026-01-26 17:21:53.409503546 +0000 UTC m=+0.187626614 container init 8e5f659aaac5a617315849b78129b4e17db44a2fe6ccec2a37fc6ab9e8944fc8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-92052205-69bb-42de-8996-b5b0b55d3221, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_managed=true)
Jan 26 17:21:53 compute-0 podman[256480]: 2026-01-26 17:21:53.417631157 +0000 UTC m=+0.195754205 container start 8e5f659aaac5a617315849b78129b4e17db44a2fe6ccec2a37fc6ab9e8944fc8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-92052205-69bb-42de-8996-b5b0b55d3221, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 26 17:21:53 compute-0 neutron-haproxy-ovnmeta-92052205-69bb-42de-8996-b5b0b55d3221[256495]: [NOTICE]   (256499) : New worker (256501) forked
Jan 26 17:21:53 compute-0 neutron-haproxy-ovnmeta-92052205-69bb-42de-8996-b5b0b55d3221[256495]: [NOTICE]   (256499) : Loading success.
Jan 26 17:21:54 compute-0 nova_compute[185389]: 2026-01-26 17:21:54.287 185393 DEBUG nova.network.neutron [req-b42ccb82-f806-45f0-baf0-8570ea3d51e1 req-fa2f9b72-522f-4d9a-8515-0798cdfe50e7 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 8c28c24a-cab4-43b3-b9ee-4ce40d092c71] Updated VIF entry in instance network info cache for port 86c33312-6904-4dd4-9a95-7fd318980439. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 17:21:54 compute-0 nova_compute[185389]: 2026-01-26 17:21:54.288 185393 DEBUG nova.network.neutron [req-b42ccb82-f806-45f0-baf0-8570ea3d51e1 req-fa2f9b72-522f-4d9a-8515-0798cdfe50e7 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 8c28c24a-cab4-43b3-b9ee-4ce40d092c71] Updating instance_info_cache with network_info: [{"id": "86c33312-6904-4dd4-9a95-7fd318980439", "address": "fa:16:3e:7d:0a:7b", "network": {"id": "92052205-69bb-42de-8996-b5b0b55d3221", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-626505889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "854cc1d25bbe4358a1a0687611af792e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86c33312-69", "ovs_interfaceid": "86c33312-6904-4dd4-9a95-7fd318980439", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 17:21:54 compute-0 nova_compute[185389]: 2026-01-26 17:21:54.309 185393 DEBUG oslo_concurrency.lockutils [req-b42ccb82-f806-45f0-baf0-8570ea3d51e1 req-fa2f9b72-522f-4d9a-8515-0798cdfe50e7 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Releasing lock "refresh_cache-8c28c24a-cab4-43b3-b9ee-4ce40d092c71" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 17:21:54 compute-0 nova_compute[185389]: 2026-01-26 17:21:54.749 185393 DEBUG nova.network.neutron [req-db77d10c-98f3-44cf-b6fc-9938a785f63e req-87628199-f588-4029-b925-8327056194f1 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: cecfd5ba-76f1-47f6-8845-36e6c7ed9773] Updated VIF entry in instance network info cache for port 9121ca16-ef95-465a-8d54-65a4d9b6659a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 17:21:54 compute-0 nova_compute[185389]: 2026-01-26 17:21:54.750 185393 DEBUG nova.network.neutron [req-db77d10c-98f3-44cf-b6fc-9938a785f63e req-87628199-f588-4029-b925-8327056194f1 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: cecfd5ba-76f1-47f6-8845-36e6c7ed9773] Updating instance_info_cache with network_info: [{"id": "9121ca16-ef95-465a-8d54-65a4d9b6659a", "address": "fa:16:3e:98:4e:d9", "network": {"id": "11ede1e9-a5f0-4f1a-82c2-9705645b0db8", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1102589919-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18301e8b436a4fa7ba388e173f305ba9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9121ca16-ef", "ovs_interfaceid": "9121ca16-ef95-465a-8d54-65a4d9b6659a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 17:21:54 compute-0 nova_compute[185389]: 2026-01-26 17:21:54.773 185393 DEBUG oslo_concurrency.lockutils [req-db77d10c-98f3-44cf-b6fc-9938a785f63e req-87628199-f588-4029-b925-8327056194f1 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Releasing lock "refresh_cache-cecfd5ba-76f1-47f6-8845-36e6c7ed9773" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 17:21:55 compute-0 nova_compute[185389]: 2026-01-26 17:21:55.206 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:55 compute-0 nova_compute[185389]: 2026-01-26 17:21:55.840 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:57 compute-0 nova_compute[185389]: 2026-01-26 17:21:57.457 185393 DEBUG nova.compute.manager [req-97c769d5-3c2d-40a6-9289-72afe6d16f37 req-f9a5316a-3d19-4745-b373-f04a660a4747 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Received event network-changed-6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 17:21:57 compute-0 nova_compute[185389]: 2026-01-26 17:21:57.458 185393 DEBUG nova.compute.manager [req-97c769d5-3c2d-40a6-9289-72afe6d16f37 req-f9a5316a-3d19-4745-b373-f04a660a4747 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Refreshing instance network info cache due to event network-changed-6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 17:21:57 compute-0 nova_compute[185389]: 2026-01-26 17:21:57.459 185393 DEBUG oslo_concurrency.lockutils [req-97c769d5-3c2d-40a6-9289-72afe6d16f37 req-f9a5316a-3d19-4745-b373-f04a660a4747 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquiring lock "refresh_cache-186e87cb-beb9-48df-8b10-dfc5c8afe996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 17:21:57 compute-0 nova_compute[185389]: 2026-01-26 17:21:57.460 185393 DEBUG oslo_concurrency.lockutils [req-97c769d5-3c2d-40a6-9289-72afe6d16f37 req-f9a5316a-3d19-4745-b373-f04a660a4747 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquired lock "refresh_cache-186e87cb-beb9-48df-8b10-dfc5c8afe996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 17:21:57 compute-0 nova_compute[185389]: 2026-01-26 17:21:57.460 185393 DEBUG nova.network.neutron [req-97c769d5-3c2d-40a6-9289-72afe6d16f37 req-f9a5316a-3d19-4745-b373-f04a660a4747 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Refreshing network info cache for port 6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 17:21:59 compute-0 ovn_controller[97699]: 2026-01-26T17:21:59Z|00081|binding|INFO|Releasing lport ec436a0a-dbef-4a50-8041-14aa7a52d155 from this chassis (sb_readonly=0)
Jan 26 17:21:59 compute-0 ovn_controller[97699]: 2026-01-26T17:21:59Z|00082|binding|INFO|Releasing lport d58b7d53-5cc1-4ed8-aa06-162121fd1800 from this chassis (sb_readonly=0)
Jan 26 17:21:59 compute-0 ovn_controller[97699]: 2026-01-26T17:21:59Z|00083|binding|INFO|Releasing lport cc3400a9-fad2-42f1-bf99-972bf42762ba from this chassis (sb_readonly=0)
Jan 26 17:21:59 compute-0 nova_compute[185389]: 2026-01-26 17:21:59.270 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:59 compute-0 podman[201244]: time="2026-01-26T17:21:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 17:21:59 compute-0 podman[201244]: @ - - [26/Jan/2026:17:21:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30974 "" "Go-http-client/1.1"
Jan 26 17:21:59 compute-0 podman[201244]: @ - - [26/Jan/2026:17:21:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5309 "" "Go-http-client/1.1"
Jan 26 17:21:59 compute-0 nova_compute[185389]: 2026-01-26 17:21:59.904 185393 DEBUG nova.network.neutron [req-97c769d5-3c2d-40a6-9289-72afe6d16f37 req-f9a5316a-3d19-4745-b373-f04a660a4747 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Updated VIF entry in instance network info cache for port 6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 17:21:59 compute-0 nova_compute[185389]: 2026-01-26 17:21:59.905 185393 DEBUG nova.network.neutron [req-97c769d5-3c2d-40a6-9289-72afe6d16f37 req-f9a5316a-3d19-4745-b373-f04a660a4747 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Updating instance_info_cache with network_info: [{"id": "6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3", "address": "fa:16:3e:b3:ea:64", "network": {"id": "4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1598418847-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b9ff6ad3012499db2eb0a82a1ccbcaa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e11a3e1-dc", "ovs_interfaceid": "6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 17:21:59 compute-0 nova_compute[185389]: 2026-01-26 17:21:59.938 185393 DEBUG oslo_concurrency.lockutils [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] Acquiring lock "3a17d6a2-7bda-406b-a180-049f0e7adc78" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:21:59 compute-0 nova_compute[185389]: 2026-01-26 17:21:59.939 185393 DEBUG oslo_concurrency.lockutils [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] Lock "3a17d6a2-7bda-406b-a180-049f0e7adc78" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:21:59 compute-0 nova_compute[185389]: 2026-01-26 17:21:59.958 185393 DEBUG oslo_concurrency.lockutils [req-97c769d5-3c2d-40a6-9289-72afe6d16f37 req-f9a5316a-3d19-4745-b373-f04a660a4747 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Releasing lock "refresh_cache-186e87cb-beb9-48df-8b10-dfc5c8afe996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 17:21:59 compute-0 nova_compute[185389]: 2026-01-26 17:21:59.964 185393 DEBUG nova.compute.manager [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 17:22:00 compute-0 nova_compute[185389]: 2026-01-26 17:22:00.153 185393 DEBUG oslo_concurrency.lockutils [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:22:00 compute-0 nova_compute[185389]: 2026-01-26 17:22:00.154 185393 DEBUG oslo_concurrency.lockutils [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:22:00 compute-0 nova_compute[185389]: 2026-01-26 17:22:00.163 185393 DEBUG nova.virt.hardware [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 17:22:00 compute-0 nova_compute[185389]: 2026-01-26 17:22:00.163 185393 INFO nova.compute.claims [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] Claim successful on node compute-0.ctlplane.example.com
Jan 26 17:22:00 compute-0 nova_compute[185389]: 2026-01-26 17:22:00.209 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:22:00 compute-0 nova_compute[185389]: 2026-01-26 17:22:00.532 185393 DEBUG nova.compute.provider_tree [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:22:00 compute-0 nova_compute[185389]: 2026-01-26 17:22:00.727 185393 DEBUG nova.scheduler.client.report [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:22:00 compute-0 nova_compute[185389]: 2026-01-26 17:22:00.758 185393 DEBUG oslo_concurrency.lockutils [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.604s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:22:00 compute-0 nova_compute[185389]: 2026-01-26 17:22:00.759 185393 DEBUG nova.compute.manager [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 17:22:00 compute-0 nova_compute[185389]: 2026-01-26 17:22:00.843 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:22:00 compute-0 nova_compute[185389]: 2026-01-26 17:22:00.853 185393 DEBUG nova.compute.manager [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 17:22:00 compute-0 nova_compute[185389]: 2026-01-26 17:22:00.854 185393 DEBUG nova.network.neutron [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 17:22:00 compute-0 nova_compute[185389]: 2026-01-26 17:22:00.887 185393 INFO nova.virt.libvirt.driver [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 17:22:00 compute-0 nova_compute[185389]: 2026-01-26 17:22:00.915 185393 DEBUG nova.compute.manager [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 17:22:01 compute-0 nova_compute[185389]: 2026-01-26 17:22:01.075 185393 DEBUG nova.compute.manager [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 17:22:01 compute-0 nova_compute[185389]: 2026-01-26 17:22:01.077 185393 DEBUG nova.virt.libvirt.driver [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 17:22:01 compute-0 nova_compute[185389]: 2026-01-26 17:22:01.078 185393 INFO nova.virt.libvirt.driver [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] Creating image(s)
Jan 26 17:22:01 compute-0 nova_compute[185389]: 2026-01-26 17:22:01.079 185393 DEBUG oslo_concurrency.lockutils [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] Acquiring lock "/var/lib/nova/instances/3a17d6a2-7bda-406b-a180-049f0e7adc78/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:22:01 compute-0 nova_compute[185389]: 2026-01-26 17:22:01.080 185393 DEBUG oslo_concurrency.lockutils [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] Lock "/var/lib/nova/instances/3a17d6a2-7bda-406b-a180-049f0e7adc78/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:22:01 compute-0 nova_compute[185389]: 2026-01-26 17:22:01.081 185393 DEBUG oslo_concurrency.lockutils [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] Lock "/var/lib/nova/instances/3a17d6a2-7bda-406b-a180-049f0e7adc78/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:22:01 compute-0 nova_compute[185389]: 2026-01-26 17:22:01.098 185393 DEBUG oslo_concurrency.processutils [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/fce4846b88d54584e094a4cd72b3ae3b642ea493 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:22:01 compute-0 nova_compute[185389]: 2026-01-26 17:22:01.148 185393 DEBUG nova.policy [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0ac7a648f1b542b193f88ff9b120f211', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '63b4132d471f40c4bc46982b5adba0ec', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 17:22:01 compute-0 nova_compute[185389]: 2026-01-26 17:22:01.157 185393 DEBUG oslo_concurrency.processutils [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/fce4846b88d54584e094a4cd72b3ae3b642ea493 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:22:01 compute-0 nova_compute[185389]: 2026-01-26 17:22:01.158 185393 DEBUG oslo_concurrency.lockutils [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] Acquiring lock "fce4846b88d54584e094a4cd72b3ae3b642ea493" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:22:01 compute-0 nova_compute[185389]: 2026-01-26 17:22:01.158 185393 DEBUG oslo_concurrency.lockutils [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] Lock "fce4846b88d54584e094a4cd72b3ae3b642ea493" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:22:01 compute-0 nova_compute[185389]: 2026-01-26 17:22:01.172 185393 DEBUG oslo_concurrency.processutils [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/fce4846b88d54584e094a4cd72b3ae3b642ea493 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:22:01 compute-0 nova_compute[185389]: 2026-01-26 17:22:01.237 185393 DEBUG oslo_concurrency.processutils [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/fce4846b88d54584e094a4cd72b3ae3b642ea493 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:22:01 compute-0 nova_compute[185389]: 2026-01-26 17:22:01.240 185393 DEBUG oslo_concurrency.processutils [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/fce4846b88d54584e094a4cd72b3ae3b642ea493,backing_fmt=raw /var/lib/nova/instances/3a17d6a2-7bda-406b-a180-049f0e7adc78/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:22:01 compute-0 nova_compute[185389]: 2026-01-26 17:22:01.303 185393 DEBUG oslo_concurrency.processutils [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/fce4846b88d54584e094a4cd72b3ae3b642ea493,backing_fmt=raw /var/lib/nova/instances/3a17d6a2-7bda-406b-a180-049f0e7adc78/disk 1073741824" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:22:01 compute-0 nova_compute[185389]: 2026-01-26 17:22:01.305 185393 DEBUG oslo_concurrency.lockutils [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] Lock "fce4846b88d54584e094a4cd72b3ae3b642ea493" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.147s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:22:01 compute-0 nova_compute[185389]: 2026-01-26 17:22:01.306 185393 DEBUG oslo_concurrency.processutils [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/fce4846b88d54584e094a4cd72b3ae3b642ea493 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:22:01 compute-0 nova_compute[185389]: 2026-01-26 17:22:01.386 185393 DEBUG oslo_concurrency.processutils [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/fce4846b88d54584e094a4cd72b3ae3b642ea493 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:22:01 compute-0 nova_compute[185389]: 2026-01-26 17:22:01.388 185393 DEBUG nova.virt.disk.api [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] Checking if we can resize image /var/lib/nova/instances/3a17d6a2-7bda-406b-a180-049f0e7adc78/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 26 17:22:01 compute-0 nova_compute[185389]: 2026-01-26 17:22:01.389 185393 DEBUG oslo_concurrency.processutils [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3a17d6a2-7bda-406b-a180-049f0e7adc78/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:22:01 compute-0 openstack_network_exporter[204387]: ERROR   17:22:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 17:22:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:22:01 compute-0 openstack_network_exporter[204387]: ERROR   17:22:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 17:22:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:22:01 compute-0 nova_compute[185389]: 2026-01-26 17:22:01.480 185393 DEBUG oslo_concurrency.processutils [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3a17d6a2-7bda-406b-a180-049f0e7adc78/disk --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:22:01 compute-0 nova_compute[185389]: 2026-01-26 17:22:01.481 185393 DEBUG nova.virt.disk.api [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] Cannot resize image /var/lib/nova/instances/3a17d6a2-7bda-406b-a180-049f0e7adc78/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 26 17:22:01 compute-0 nova_compute[185389]: 2026-01-26 17:22:01.482 185393 DEBUG nova.objects.instance [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] Lazy-loading 'migration_context' on Instance uuid 3a17d6a2-7bda-406b-a180-049f0e7adc78 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 17:22:01 compute-0 nova_compute[185389]: 2026-01-26 17:22:01.500 185393 DEBUG nova.virt.libvirt.driver [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 17:22:01 compute-0 nova_compute[185389]: 2026-01-26 17:22:01.501 185393 DEBUG nova.virt.libvirt.driver [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] Ensure instance console log exists: /var/lib/nova/instances/3a17d6a2-7bda-406b-a180-049f0e7adc78/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 17:22:01 compute-0 nova_compute[185389]: 2026-01-26 17:22:01.502 185393 DEBUG oslo_concurrency.lockutils [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:22:01 compute-0 nova_compute[185389]: 2026-01-26 17:22:01.502 185393 DEBUG oslo_concurrency.lockutils [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:22:01 compute-0 nova_compute[185389]: 2026-01-26 17:22:01.503 185393 DEBUG oslo_concurrency.lockutils [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:22:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:22:01.776 106955 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:22:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:22:01.778 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:22:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:22:01.780 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:22:02 compute-0 nova_compute[185389]: 2026-01-26 17:22:02.507 185393 DEBUG nova.network.neutron [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] Successfully created port: 244cc784-cc22-4baa-ae9b-a9648a2a11b8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 17:22:04 compute-0 nova_compute[185389]: 2026-01-26 17:22:04.525 185393 DEBUG nova.network.neutron [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] Successfully updated port: 244cc784-cc22-4baa-ae9b-a9648a2a11b8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 17:22:04 compute-0 nova_compute[185389]: 2026-01-26 17:22:04.542 185393 DEBUG oslo_concurrency.lockutils [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] Acquiring lock "refresh_cache-3a17d6a2-7bda-406b-a180-049f0e7adc78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 17:22:04 compute-0 nova_compute[185389]: 2026-01-26 17:22:04.543 185393 DEBUG oslo_concurrency.lockutils [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] Acquired lock "refresh_cache-3a17d6a2-7bda-406b-a180-049f0e7adc78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 17:22:04 compute-0 nova_compute[185389]: 2026-01-26 17:22:04.543 185393 DEBUG nova.network.neutron [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 17:22:04 compute-0 nova_compute[185389]: 2026-01-26 17:22:04.819 185393 DEBUG nova.compute.manager [req-ad56af89-9333-41d8-b62e-d6d5625af70e req-2567fe23-97ad-4900-93ba-1e9eecdb1a0c 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] Received event network-changed-244cc784-cc22-4baa-ae9b-a9648a2a11b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 17:22:04 compute-0 nova_compute[185389]: 2026-01-26 17:22:04.820 185393 DEBUG nova.compute.manager [req-ad56af89-9333-41d8-b62e-d6d5625af70e req-2567fe23-97ad-4900-93ba-1e9eecdb1a0c 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] Refreshing instance network info cache due to event network-changed-244cc784-cc22-4baa-ae9b-a9648a2a11b8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 17:22:04 compute-0 nova_compute[185389]: 2026-01-26 17:22:04.820 185393 DEBUG oslo_concurrency.lockutils [req-ad56af89-9333-41d8-b62e-d6d5625af70e req-2567fe23-97ad-4900-93ba-1e9eecdb1a0c 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquiring lock "refresh_cache-3a17d6a2-7bda-406b-a180-049f0e7adc78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 17:22:05 compute-0 nova_compute[185389]: 2026-01-26 17:22:05.084 185393 DEBUG nova.network.neutron [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 17:22:05 compute-0 nova_compute[185389]: 2026-01-26 17:22:05.212 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:22:05 compute-0 nova_compute[185389]: 2026-01-26 17:22:05.847 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:22:07 compute-0 nova_compute[185389]: 2026-01-26 17:22:07.196 185393 DEBUG nova.network.neutron [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] Updating instance_info_cache with network_info: [{"id": "244cc784-cc22-4baa-ae9b-a9648a2a11b8", "address": "fa:16:3e:95:38:b5", "network": {"id": "f2973d9a-cd90-4302-94cd-5d199c633af0", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1831496438-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63b4132d471f40c4bc46982b5adba0ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap244cc784-cc", "ovs_interfaceid": "244cc784-cc22-4baa-ae9b-a9648a2a11b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 17:22:07 compute-0 podman[256526]: 2026-01-26 17:22:07.198388355 +0000 UTC m=+0.087151397 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, 
managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260120, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, tcib_managed=true, config_id=ceilometer_agent_compute)
Jan 26 17:22:07 compute-0 podman[256525]: 2026-01-26 17:22:07.199550577 +0000 UTC m=+0.090829767 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, version=9.6, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41)
Jan 26 17:22:07 compute-0 podman[256527]: 2026-01-26 17:22:07.21147075 +0000 UTC m=+0.096065198 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 26 17:22:07 compute-0 nova_compute[185389]: 2026-01-26 17:22:07.239 185393 DEBUG oslo_concurrency.lockutils [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] Releasing lock "refresh_cache-3a17d6a2-7bda-406b-a180-049f0e7adc78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 17:22:07 compute-0 nova_compute[185389]: 2026-01-26 17:22:07.240 185393 DEBUG nova.compute.manager [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] Instance network_info: |[{"id": "244cc784-cc22-4baa-ae9b-a9648a2a11b8", "address": "fa:16:3e:95:38:b5", "network": {"id": "f2973d9a-cd90-4302-94cd-5d199c633af0", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1831496438-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63b4132d471f40c4bc46982b5adba0ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap244cc784-cc", "ovs_interfaceid": "244cc784-cc22-4baa-ae9b-a9648a2a11b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 17:22:07 compute-0 nova_compute[185389]: 2026-01-26 17:22:07.241 185393 DEBUG oslo_concurrency.lockutils [req-ad56af89-9333-41d8-b62e-d6d5625af70e req-2567fe23-97ad-4900-93ba-1e9eecdb1a0c 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquired lock "refresh_cache-3a17d6a2-7bda-406b-a180-049f0e7adc78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 17:22:07 compute-0 nova_compute[185389]: 2026-01-26 17:22:07.241 185393 DEBUG nova.network.neutron [req-ad56af89-9333-41d8-b62e-d6d5625af70e req-2567fe23-97ad-4900-93ba-1e9eecdb1a0c 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] Refreshing network info cache for port 244cc784-cc22-4baa-ae9b-a9648a2a11b8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 17:22:07 compute-0 nova_compute[185389]: 2026-01-26 17:22:07.245 185393 DEBUG nova.virt.libvirt.driver [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] Start _get_guest_xml network_info=[{"id": "244cc784-cc22-4baa-ae9b-a9648a2a11b8", "address": "fa:16:3e:95:38:b5", "network": {"id": "f2973d9a-cd90-4302-94cd-5d199c633af0", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1831496438-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63b4132d471f40c4bc46982b5adba0ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap244cc784-cc", "ovs_interfaceid": "244cc784-cc22-4baa-ae9b-a9648a2a11b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T17:20:37Z,direct_url=<?>,disk_format='qcow2',id=90acf026-cf3a-409a-999e-35d89bb9a6bf,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='aa8f1f3bbce34237a208c8e92ca9286f',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T17:20:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'boot_index': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '90acf026-cf3a-409a-999e-35d89bb9a6bf'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 17:22:07 compute-0 nova_compute[185389]: 2026-01-26 17:22:07.252 185393 WARNING nova.virt.libvirt.driver [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:22:07 compute-0 nova_compute[185389]: 2026-01-26 17:22:07.258 185393 DEBUG nova.virt.libvirt.host [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 17:22:07 compute-0 nova_compute[185389]: 2026-01-26 17:22:07.259 185393 DEBUG nova.virt.libvirt.host [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 17:22:07 compute-0 nova_compute[185389]: 2026-01-26 17:22:07.267 185393 DEBUG nova.virt.libvirt.host [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 17:22:07 compute-0 nova_compute[185389]: 2026-01-26 17:22:07.268 185393 DEBUG nova.virt.libvirt.host [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 17:22:07 compute-0 nova_compute[185389]: 2026-01-26 17:22:07.268 185393 DEBUG nova.virt.libvirt.driver [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 17:22:07 compute-0 nova_compute[185389]: 2026-01-26 17:22:07.269 185393 DEBUG nova.virt.hardware [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T17:20:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8d013773-e8ea-4b83-a8e3-f58d9749637f',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T17:20:37Z,direct_url=<?>,disk_format='qcow2',id=90acf026-cf3a-409a-999e-35d89bb9a6bf,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='aa8f1f3bbce34237a208c8e92ca9286f',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T17:20:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 17:22:07 compute-0 nova_compute[185389]: 2026-01-26 17:22:07.269 185393 DEBUG nova.virt.hardware [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 17:22:07 compute-0 nova_compute[185389]: 2026-01-26 17:22:07.270 185393 DEBUG nova.virt.hardware [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 17:22:07 compute-0 nova_compute[185389]: 2026-01-26 17:22:07.270 185393 DEBUG nova.virt.hardware [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 17:22:07 compute-0 nova_compute[185389]: 2026-01-26 17:22:07.270 185393 DEBUG nova.virt.hardware [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 17:22:07 compute-0 nova_compute[185389]: 2026-01-26 17:22:07.271 185393 DEBUG nova.virt.hardware [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 17:22:07 compute-0 nova_compute[185389]: 2026-01-26 17:22:07.271 185393 DEBUG nova.virt.hardware [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 17:22:07 compute-0 nova_compute[185389]: 2026-01-26 17:22:07.271 185393 DEBUG nova.virt.hardware [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 17:22:07 compute-0 nova_compute[185389]: 2026-01-26 17:22:07.272 185393 DEBUG nova.virt.hardware [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 17:22:07 compute-0 nova_compute[185389]: 2026-01-26 17:22:07.272 185393 DEBUG nova.virt.hardware [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 17:22:07 compute-0 nova_compute[185389]: 2026-01-26 17:22:07.272 185393 DEBUG nova.virt.hardware [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 17:22:07 compute-0 nova_compute[185389]: 2026-01-26 17:22:07.276 185393 DEBUG nova.virt.libvirt.vif [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T17:21:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1027940370',display_name='tempest-AttachInterfacesUnderV243Test-server-1027940370',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1027940370',id=10,image_ref='90acf026-cf3a-409a-999e-35d89bb9a6bf',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHA4tlpIA5xPYWg25OrWZf25mgJJUQgHpl0o+5am0huMtCCdzeNB4+BNDx48EvTBsdSFA3wCFEGCW1Btwh4puP8AnxRuaEzCk2E9GsGP0ChphDhSWKC/2GFYoPfdzwRjhw==',key_name='tempest-keypair-818388180',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='63b4132d471f40c4bc46982b5adba0ec',ramdisk_id='',reservation_id='r-zpu0djkf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='90acf026-cf3a-409a-999e-35d89bb9a6bf',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-286995791',owner_user_name='tempest-AttachInterfacesUnderV243Test-286995791-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T17:22:00Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='0ac7a648f1b542b193f88ff9b120f211',uuid=3a17d6a2-7bda-406b-a180-049f0e7adc78,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "244cc784-cc22-4baa-ae9b-a9648a2a11b8", "address": "fa:16:3e:95:38:b5", "network": {"id": "f2973d9a-cd90-4302-94cd-5d199c633af0", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1831496438-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63b4132d471f40c4bc46982b5adba0ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap244cc784-cc", "ovs_interfaceid": "244cc784-cc22-4baa-ae9b-a9648a2a11b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 17:22:07 compute-0 nova_compute[185389]: 2026-01-26 17:22:07.276 185393 DEBUG nova.network.os_vif_util [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] Converting VIF {"id": "244cc784-cc22-4baa-ae9b-a9648a2a11b8", "address": "fa:16:3e:95:38:b5", "network": {"id": "f2973d9a-cd90-4302-94cd-5d199c633af0", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1831496438-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63b4132d471f40c4bc46982b5adba0ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap244cc784-cc", "ovs_interfaceid": "244cc784-cc22-4baa-ae9b-a9648a2a11b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 17:22:07 compute-0 nova_compute[185389]: 2026-01-26 17:22:07.277 185393 DEBUG nova.network.os_vif_util [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:95:38:b5,bridge_name='br-int',has_traffic_filtering=True,id=244cc784-cc22-4baa-ae9b-a9648a2a11b8,network=Network(f2973d9a-cd90-4302-94cd-5d199c633af0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap244cc784-cc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 17:22:07 compute-0 nova_compute[185389]: 2026-01-26 17:22:07.278 185393 DEBUG nova.objects.instance [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] Lazy-loading 'pci_devices' on Instance uuid 3a17d6a2-7bda-406b-a180-049f0e7adc78 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 17:22:07 compute-0 nova_compute[185389]: 2026-01-26 17:22:07.313 185393 DEBUG nova.virt.libvirt.driver [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] End _get_guest_xml xml=<domain type="kvm">
Jan 26 17:22:07 compute-0 nova_compute[185389]:   <uuid>3a17d6a2-7bda-406b-a180-049f0e7adc78</uuid>
Jan 26 17:22:07 compute-0 nova_compute[185389]:   <name>instance-0000000a</name>
Jan 26 17:22:07 compute-0 nova_compute[185389]:   <memory>131072</memory>
Jan 26 17:22:07 compute-0 nova_compute[185389]:   <vcpu>1</vcpu>
Jan 26 17:22:07 compute-0 nova_compute[185389]:   <metadata>
Jan 26 17:22:07 compute-0 nova_compute[185389]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 17:22:07 compute-0 nova_compute[185389]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 17:22:07 compute-0 nova_compute[185389]:       <nova:name>tempest-AttachInterfacesUnderV243Test-server-1027940370</nova:name>
Jan 26 17:22:07 compute-0 nova_compute[185389]:       <nova:creationTime>2026-01-26 17:22:07</nova:creationTime>
Jan 26 17:22:07 compute-0 nova_compute[185389]:       <nova:flavor name="m1.nano">
Jan 26 17:22:07 compute-0 nova_compute[185389]:         <nova:memory>128</nova:memory>
Jan 26 17:22:07 compute-0 nova_compute[185389]:         <nova:disk>1</nova:disk>
Jan 26 17:22:07 compute-0 nova_compute[185389]:         <nova:swap>0</nova:swap>
Jan 26 17:22:07 compute-0 nova_compute[185389]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 17:22:07 compute-0 nova_compute[185389]:         <nova:vcpus>1</nova:vcpus>
Jan 26 17:22:07 compute-0 nova_compute[185389]:       </nova:flavor>
Jan 26 17:22:07 compute-0 nova_compute[185389]:       <nova:owner>
Jan 26 17:22:07 compute-0 nova_compute[185389]:         <nova:user uuid="0ac7a648f1b542b193f88ff9b120f211">tempest-AttachInterfacesUnderV243Test-286995791-project-member</nova:user>
Jan 26 17:22:07 compute-0 nova_compute[185389]:         <nova:project uuid="63b4132d471f40c4bc46982b5adba0ec">tempest-AttachInterfacesUnderV243Test-286995791</nova:project>
Jan 26 17:22:07 compute-0 nova_compute[185389]:       </nova:owner>
Jan 26 17:22:07 compute-0 nova_compute[185389]:       <nova:root type="image" uuid="90acf026-cf3a-409a-999e-35d89bb9a6bf"/>
Jan 26 17:22:07 compute-0 nova_compute[185389]:       <nova:ports>
Jan 26 17:22:07 compute-0 nova_compute[185389]:         <nova:port uuid="244cc784-cc22-4baa-ae9b-a9648a2a11b8">
Jan 26 17:22:07 compute-0 nova_compute[185389]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 26 17:22:07 compute-0 nova_compute[185389]:         </nova:port>
Jan 26 17:22:07 compute-0 nova_compute[185389]:       </nova:ports>
Jan 26 17:22:07 compute-0 nova_compute[185389]:     </nova:instance>
Jan 26 17:22:07 compute-0 nova_compute[185389]:   </metadata>
Jan 26 17:22:07 compute-0 nova_compute[185389]:   <sysinfo type="smbios">
Jan 26 17:22:07 compute-0 nova_compute[185389]:     <system>
Jan 26 17:22:07 compute-0 nova_compute[185389]:       <entry name="manufacturer">RDO</entry>
Jan 26 17:22:07 compute-0 nova_compute[185389]:       <entry name="product">OpenStack Compute</entry>
Jan 26 17:22:07 compute-0 nova_compute[185389]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 17:22:07 compute-0 nova_compute[185389]:       <entry name="serial">3a17d6a2-7bda-406b-a180-049f0e7adc78</entry>
Jan 26 17:22:07 compute-0 nova_compute[185389]:       <entry name="uuid">3a17d6a2-7bda-406b-a180-049f0e7adc78</entry>
Jan 26 17:22:07 compute-0 nova_compute[185389]:       <entry name="family">Virtual Machine</entry>
Jan 26 17:22:07 compute-0 nova_compute[185389]:     </system>
Jan 26 17:22:07 compute-0 nova_compute[185389]:   </sysinfo>
Jan 26 17:22:07 compute-0 nova_compute[185389]:   <os>
Jan 26 17:22:07 compute-0 nova_compute[185389]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 17:22:07 compute-0 nova_compute[185389]:     <boot dev="hd"/>
Jan 26 17:22:07 compute-0 nova_compute[185389]:     <smbios mode="sysinfo"/>
Jan 26 17:22:07 compute-0 nova_compute[185389]:   </os>
Jan 26 17:22:07 compute-0 nova_compute[185389]:   <features>
Jan 26 17:22:07 compute-0 nova_compute[185389]:     <acpi/>
Jan 26 17:22:07 compute-0 nova_compute[185389]:     <apic/>
Jan 26 17:22:07 compute-0 nova_compute[185389]:     <vmcoreinfo/>
Jan 26 17:22:07 compute-0 nova_compute[185389]:   </features>
Jan 26 17:22:07 compute-0 nova_compute[185389]:   <clock offset="utc">
Jan 26 17:22:07 compute-0 nova_compute[185389]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 17:22:07 compute-0 nova_compute[185389]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 17:22:07 compute-0 nova_compute[185389]:     <timer name="hpet" present="no"/>
Jan 26 17:22:07 compute-0 nova_compute[185389]:   </clock>
Jan 26 17:22:07 compute-0 nova_compute[185389]:   <cpu mode="host-model" match="exact">
Jan 26 17:22:07 compute-0 nova_compute[185389]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 17:22:07 compute-0 nova_compute[185389]:   </cpu>
Jan 26 17:22:07 compute-0 nova_compute[185389]:   <devices>
Jan 26 17:22:07 compute-0 nova_compute[185389]:     <disk type="file" device="disk">
Jan 26 17:22:07 compute-0 nova_compute[185389]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 26 17:22:07 compute-0 nova_compute[185389]:       <source file="/var/lib/nova/instances/3a17d6a2-7bda-406b-a180-049f0e7adc78/disk"/>
Jan 26 17:22:07 compute-0 nova_compute[185389]:       <target dev="vda" bus="virtio"/>
Jan 26 17:22:07 compute-0 nova_compute[185389]:     </disk>
Jan 26 17:22:07 compute-0 nova_compute[185389]:     <disk type="file" device="cdrom">
Jan 26 17:22:07 compute-0 nova_compute[185389]:       <driver name="qemu" type="raw" cache="none"/>
Jan 26 17:22:07 compute-0 nova_compute[185389]:       <source file="/var/lib/nova/instances/3a17d6a2-7bda-406b-a180-049f0e7adc78/disk.config"/>
Jan 26 17:22:07 compute-0 nova_compute[185389]:       <target dev="sda" bus="sata"/>
Jan 26 17:22:07 compute-0 nova_compute[185389]:     </disk>
Jan 26 17:22:07 compute-0 nova_compute[185389]:     <interface type="ethernet">
Jan 26 17:22:07 compute-0 nova_compute[185389]:       <mac address="fa:16:3e:95:38:b5"/>
Jan 26 17:22:07 compute-0 nova_compute[185389]:       <model type="virtio"/>
Jan 26 17:22:07 compute-0 nova_compute[185389]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 17:22:07 compute-0 nova_compute[185389]:       <mtu size="1442"/>
Jan 26 17:22:07 compute-0 nova_compute[185389]:       <target dev="tap244cc784-cc"/>
Jan 26 17:22:07 compute-0 nova_compute[185389]:     </interface>
Jan 26 17:22:07 compute-0 nova_compute[185389]:     <serial type="pty">
Jan 26 17:22:07 compute-0 nova_compute[185389]:       <log file="/var/lib/nova/instances/3a17d6a2-7bda-406b-a180-049f0e7adc78/console.log" append="off"/>
Jan 26 17:22:07 compute-0 nova_compute[185389]:     </serial>
Jan 26 17:22:07 compute-0 nova_compute[185389]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 17:22:07 compute-0 nova_compute[185389]:     <video>
Jan 26 17:22:07 compute-0 nova_compute[185389]:       <model type="virtio"/>
Jan 26 17:22:07 compute-0 nova_compute[185389]:     </video>
Jan 26 17:22:07 compute-0 nova_compute[185389]:     <input type="tablet" bus="usb"/>
Jan 26 17:22:07 compute-0 nova_compute[185389]:     <rng model="virtio">
Jan 26 17:22:07 compute-0 nova_compute[185389]:       <backend model="random">/dev/urandom</backend>
Jan 26 17:22:07 compute-0 nova_compute[185389]:     </rng>
Jan 26 17:22:07 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root"/>
Jan 26 17:22:07 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:22:07 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:22:07 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:22:07 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:22:07 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:22:07 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:22:07 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:22:07 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:22:07 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:22:07 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:22:07 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:22:07 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:22:07 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:22:07 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:22:07 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:22:07 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:22:07 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:22:07 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:22:07 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:22:07 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:22:07 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:22:07 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:22:07 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:22:07 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:22:07 compute-0 nova_compute[185389]:     <controller type="usb" index="0"/>
Jan 26 17:22:07 compute-0 nova_compute[185389]:     <memballoon model="virtio">
Jan 26 17:22:07 compute-0 nova_compute[185389]:       <stats period="10"/>
Jan 26 17:22:07 compute-0 nova_compute[185389]:     </memballoon>
Jan 26 17:22:07 compute-0 nova_compute[185389]:   </devices>
Jan 26 17:22:07 compute-0 nova_compute[185389]: </domain>
Jan 26 17:22:07 compute-0 nova_compute[185389]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 17:22:07 compute-0 nova_compute[185389]: 2026-01-26 17:22:07.315 185393 DEBUG nova.compute.manager [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] Preparing to wait for external event network-vif-plugged-244cc784-cc22-4baa-ae9b-a9648a2a11b8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 17:22:07 compute-0 nova_compute[185389]: 2026-01-26 17:22:07.315 185393 DEBUG oslo_concurrency.lockutils [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] Acquiring lock "3a17d6a2-7bda-406b-a180-049f0e7adc78-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:22:07 compute-0 nova_compute[185389]: 2026-01-26 17:22:07.316 185393 DEBUG oslo_concurrency.lockutils [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] Lock "3a17d6a2-7bda-406b-a180-049f0e7adc78-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:22:07 compute-0 nova_compute[185389]: 2026-01-26 17:22:07.316 185393 DEBUG oslo_concurrency.lockutils [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] Lock "3a17d6a2-7bda-406b-a180-049f0e7adc78-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:22:07 compute-0 nova_compute[185389]: 2026-01-26 17:22:07.317 185393 DEBUG nova.virt.libvirt.vif [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T17:21:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1027940370',display_name='tempest-AttachInterfacesUnderV243Test-server-1027940370',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1027940370',id=10,image_ref='90acf026-cf3a-409a-999e-35d89bb9a6bf',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHA4tlpIA5xPYWg25OrWZf25mgJJUQgHpl0o+5am0huMtCCdzeNB4+BNDx48EvTBsdSFA3wCFEGCW1Btwh4puP8AnxRuaEzCk2E9GsGP0ChphDhSWKC/2GFYoPfdzwRjhw==',key_name='tempest-keypair-818388180',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='63b4132d471f40c4bc46982b5adba0ec',ramdisk_id='',reservation_id='r-zpu0djkf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='90acf026-cf3a-409a-999e-35d89bb9a6bf',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-286995791',owner_user_name='tempest-AttachInterfacesUnderV243Test-286995791-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T17:22:00Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='0ac7a648f1b542b193f88ff9b120f211',uuid=3a17d6a2-7bda-406b-a180-049f0e7adc78,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "244cc784-cc22-4baa-ae9b-a9648a2a11b8", "address": "fa:16:3e:95:38:b5", "network": {"id": "f2973d9a-cd90-4302-94cd-5d199c633af0", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1831496438-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63b4132d471f40c4bc46982b5adba0ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap244cc784-cc", "ovs_interfaceid": "244cc784-cc22-4baa-ae9b-a9648a2a11b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 17:22:07 compute-0 nova_compute[185389]: 2026-01-26 17:22:07.318 185393 DEBUG nova.network.os_vif_util [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] Converting VIF {"id": "244cc784-cc22-4baa-ae9b-a9648a2a11b8", "address": "fa:16:3e:95:38:b5", "network": {"id": "f2973d9a-cd90-4302-94cd-5d199c633af0", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1831496438-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63b4132d471f40c4bc46982b5adba0ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap244cc784-cc", "ovs_interfaceid": "244cc784-cc22-4baa-ae9b-a9648a2a11b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 17:22:07 compute-0 nova_compute[185389]: 2026-01-26 17:22:07.318 185393 DEBUG nova.network.os_vif_util [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:95:38:b5,bridge_name='br-int',has_traffic_filtering=True,id=244cc784-cc22-4baa-ae9b-a9648a2a11b8,network=Network(f2973d9a-cd90-4302-94cd-5d199c633af0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap244cc784-cc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 17:22:07 compute-0 nova_compute[185389]: 2026-01-26 17:22:07.319 185393 DEBUG os_vif [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:95:38:b5,bridge_name='br-int',has_traffic_filtering=True,id=244cc784-cc22-4baa-ae9b-a9648a2a11b8,network=Network(f2973d9a-cd90-4302-94cd-5d199c633af0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap244cc784-cc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 17:22:07 compute-0 nova_compute[185389]: 2026-01-26 17:22:07.320 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:22:07 compute-0 nova_compute[185389]: 2026-01-26 17:22:07.320 185393 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:22:07 compute-0 nova_compute[185389]: 2026-01-26 17:22:07.321 185393 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 17:22:07 compute-0 nova_compute[185389]: 2026-01-26 17:22:07.325 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:22:07 compute-0 nova_compute[185389]: 2026-01-26 17:22:07.325 185393 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap244cc784-cc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:22:07 compute-0 nova_compute[185389]: 2026-01-26 17:22:07.326 185393 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap244cc784-cc, col_values=(('external_ids', {'iface-id': '244cc784-cc22-4baa-ae9b-a9648a2a11b8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:95:38:b5', 'vm-uuid': '3a17d6a2-7bda-406b-a180-049f0e7adc78'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:22:07 compute-0 nova_compute[185389]: 2026-01-26 17:22:07.327 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:22:07 compute-0 NetworkManager[56253]: <info>  [1769448127.3291] manager: (tap244cc784-cc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/47)
Jan 26 17:22:07 compute-0 nova_compute[185389]: 2026-01-26 17:22:07.329 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 17:22:07 compute-0 nova_compute[185389]: 2026-01-26 17:22:07.337 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:22:07 compute-0 nova_compute[185389]: 2026-01-26 17:22:07.338 185393 INFO os_vif [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:95:38:b5,bridge_name='br-int',has_traffic_filtering=True,id=244cc784-cc22-4baa-ae9b-a9648a2a11b8,network=Network(f2973d9a-cd90-4302-94cd-5d199c633af0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap244cc784-cc')
Jan 26 17:22:07 compute-0 nova_compute[185389]: 2026-01-26 17:22:07.502 185393 DEBUG nova.virt.libvirt.driver [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 17:22:07 compute-0 nova_compute[185389]: 2026-01-26 17:22:07.503 185393 DEBUG nova.virt.libvirt.driver [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 17:22:07 compute-0 nova_compute[185389]: 2026-01-26 17:22:07.504 185393 DEBUG nova.virt.libvirt.driver [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] No VIF found with MAC fa:16:3e:95:38:b5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 17:22:07 compute-0 nova_compute[185389]: 2026-01-26 17:22:07.505 185393 INFO nova.virt.libvirt.driver [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] Using config drive
Jan 26 17:22:08 compute-0 nova_compute[185389]: 2026-01-26 17:22:08.106 185393 INFO nova.virt.libvirt.driver [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] Creating config drive at /var/lib/nova/instances/3a17d6a2-7bda-406b-a180-049f0e7adc78/disk.config
Jan 26 17:22:08 compute-0 nova_compute[185389]: 2026-01-26 17:22:08.113 185393 DEBUG oslo_concurrency.processutils [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3a17d6a2-7bda-406b-a180-049f0e7adc78/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnasv3tgb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:22:08 compute-0 nova_compute[185389]: 2026-01-26 17:22:08.240 185393 DEBUG oslo_concurrency.processutils [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3a17d6a2-7bda-406b-a180-049f0e7adc78/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnasv3tgb" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:22:08 compute-0 kernel: tap244cc784-cc: entered promiscuous mode
Jan 26 17:22:08 compute-0 NetworkManager[56253]: <info>  [1769448128.3082] manager: (tap244cc784-cc): new Tun device (/org/freedesktop/NetworkManager/Devices/48)
Jan 26 17:22:08 compute-0 ovn_controller[97699]: 2026-01-26T17:22:08Z|00084|binding|INFO|Claiming lport 244cc784-cc22-4baa-ae9b-a9648a2a11b8 for this chassis.
Jan 26 17:22:08 compute-0 ovn_controller[97699]: 2026-01-26T17:22:08Z|00085|binding|INFO|244cc784-cc22-4baa-ae9b-a9648a2a11b8: Claiming fa:16:3e:95:38:b5 10.100.0.3
Jan 26 17:22:08 compute-0 nova_compute[185389]: 2026-01-26 17:22:08.312 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:22:08 compute-0 nova_compute[185389]: 2026-01-26 17:22:08.329 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:22:08 compute-0 ovn_controller[97699]: 2026-01-26T17:22:08Z|00086|binding|INFO|Setting lport 244cc784-cc22-4baa-ae9b-a9648a2a11b8 ovn-installed in OVS
Jan 26 17:22:08 compute-0 nova_compute[185389]: 2026-01-26 17:22:08.334 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:22:08 compute-0 systemd-udevd[256603]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 17:22:08 compute-0 systemd-machined[156679]: New machine qemu-10-instance-0000000a.
Jan 26 17:22:08 compute-0 NetworkManager[56253]: <info>  [1769448128.3622] device (tap244cc784-cc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 17:22:08 compute-0 NetworkManager[56253]: <info>  [1769448128.3692] device (tap244cc784-cc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 17:22:08 compute-0 systemd[1]: Started Virtual Machine qemu-10-instance-0000000a.
Jan 26 17:22:08 compute-0 nova_compute[185389]: 2026-01-26 17:22:08.442 185393 DEBUG nova.compute.manager [req-55e4abcc-7f65-49ad-9f85-3cc8e0f6b443 req-a878969d-f04f-46b7-a159-b739742dfa5e 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: cecfd5ba-76f1-47f6-8845-36e6c7ed9773] Received event network-vif-plugged-9121ca16-ef95-465a-8d54-65a4d9b6659a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 17:22:08 compute-0 nova_compute[185389]: 2026-01-26 17:22:08.443 185393 DEBUG oslo_concurrency.lockutils [req-55e4abcc-7f65-49ad-9f85-3cc8e0f6b443 req-a878969d-f04f-46b7-a159-b739742dfa5e 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquiring lock "cecfd5ba-76f1-47f6-8845-36e6c7ed9773-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:22:08 compute-0 nova_compute[185389]: 2026-01-26 17:22:08.443 185393 DEBUG oslo_concurrency.lockutils [req-55e4abcc-7f65-49ad-9f85-3cc8e0f6b443 req-a878969d-f04f-46b7-a159-b739742dfa5e 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "cecfd5ba-76f1-47f6-8845-36e6c7ed9773-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:22:08 compute-0 nova_compute[185389]: 2026-01-26 17:22:08.444 185393 DEBUG oslo_concurrency.lockutils [req-55e4abcc-7f65-49ad-9f85-3cc8e0f6b443 req-a878969d-f04f-46b7-a159-b739742dfa5e 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "cecfd5ba-76f1-47f6-8845-36e6c7ed9773-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:22:08 compute-0 nova_compute[185389]: 2026-01-26 17:22:08.444 185393 DEBUG nova.compute.manager [req-55e4abcc-7f65-49ad-9f85-3cc8e0f6b443 req-a878969d-f04f-46b7-a159-b739742dfa5e 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: cecfd5ba-76f1-47f6-8845-36e6c7ed9773] Processing event network-vif-plugged-9121ca16-ef95-465a-8d54-65a4d9b6659a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 17:22:08 compute-0 nova_compute[185389]: 2026-01-26 17:22:08.445 185393 DEBUG nova.compute.manager [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] [instance: cecfd5ba-76f1-47f6-8845-36e6c7ed9773] Instance event wait completed in 15 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 17:22:08 compute-0 nova_compute[185389]: 2026-01-26 17:22:08.452 185393 DEBUG nova.virt.libvirt.driver [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] [instance: cecfd5ba-76f1-47f6-8845-36e6c7ed9773] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 17:22:08 compute-0 nova_compute[185389]: 2026-01-26 17:22:08.453 185393 DEBUG nova.virt.driver [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Emitting event <LifecycleEvent: 1769448128.4520128, cecfd5ba-76f1-47f6-8845-36e6c7ed9773 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 17:22:08 compute-0 nova_compute[185389]: 2026-01-26 17:22:08.454 185393 INFO nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: cecfd5ba-76f1-47f6-8845-36e6c7ed9773] VM Resumed (Lifecycle Event)
Jan 26 17:22:08 compute-0 nova_compute[185389]: 2026-01-26 17:22:08.463 185393 INFO nova.virt.libvirt.driver [-] [instance: cecfd5ba-76f1-47f6-8845-36e6c7ed9773] Instance spawned successfully.
Jan 26 17:22:08 compute-0 nova_compute[185389]: 2026-01-26 17:22:08.464 185393 DEBUG nova.virt.libvirt.driver [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] [instance: cecfd5ba-76f1-47f6-8845-36e6c7ed9773] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 17:22:08 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:22:08.732 106955 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:95:38:b5 10.100.0.3'], port_security=['fa:16:3e:95:38:b5 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '3a17d6a2-7bda-406b-a180-049f0e7adc78', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f2973d9a-cd90-4302-94cd-5d199c633af0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '63b4132d471f40c4bc46982b5adba0ec', 'neutron:revision_number': '2', 'neutron:security_group_ids': '29bb6900-aedc-4398-903a-a870631fd529', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3af21038-3c7d-4aaa-9df8-6451de57b700, chassis=[<ovs.db.idl.Row object at 0x7faee18cfdf0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7faee18cfdf0>], logical_port=244cc784-cc22-4baa-ae9b-a9648a2a11b8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 17:22:08 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:22:08.733 106955 INFO neutron.agent.ovn.metadata.agent [-] Port 244cc784-cc22-4baa-ae9b-a9648a2a11b8 in datapath f2973d9a-cd90-4302-94cd-5d199c633af0 bound to our chassis
Jan 26 17:22:08 compute-0 ovn_controller[97699]: 2026-01-26T17:22:08Z|00087|binding|INFO|Setting lport 244cc784-cc22-4baa-ae9b-a9648a2a11b8 up in Southbound
Jan 26 17:22:08 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:22:08.738 106955 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f2973d9a-cd90-4302-94cd-5d199c633af0
Jan 26 17:22:08 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:22:08.752 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[f9b7b9f5-5879-4dbc-b4e2-6136ea97b876]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:22:08 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:22:08.754 106955 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf2973d9a-c1 in ovnmeta-f2973d9a-cd90-4302-94cd-5d199c633af0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 17:22:08 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:22:08.755 238734 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf2973d9a-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 17:22:08 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:22:08.755 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[92bfacd8-533f-467b-b3d9-75ef7720d6b9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:22:08 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:22:08.757 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[c454b0df-24e5-41e9-bcd7-9d42341bd52e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:22:08 compute-0 nova_compute[185389]: 2026-01-26 17:22:08.770 185393 DEBUG nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: cecfd5ba-76f1-47f6-8845-36e6c7ed9773] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 17:22:08 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:22:08.772 107449 DEBUG oslo.privsep.daemon [-] privsep: reply[cca8ca4f-69aa-485d-908b-67375cd82d1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:22:08 compute-0 nova_compute[185389]: 2026-01-26 17:22:08.776 185393 DEBUG nova.virt.libvirt.driver [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] [instance: cecfd5ba-76f1-47f6-8845-36e6c7ed9773] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 17:22:08 compute-0 nova_compute[185389]: 2026-01-26 17:22:08.777 185393 DEBUG nova.virt.libvirt.driver [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] [instance: cecfd5ba-76f1-47f6-8845-36e6c7ed9773] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 17:22:08 compute-0 nova_compute[185389]: 2026-01-26 17:22:08.777 185393 DEBUG nova.virt.libvirt.driver [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] [instance: cecfd5ba-76f1-47f6-8845-36e6c7ed9773] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 17:22:08 compute-0 nova_compute[185389]: 2026-01-26 17:22:08.778 185393 DEBUG nova.virt.libvirt.driver [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] [instance: cecfd5ba-76f1-47f6-8845-36e6c7ed9773] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 17:22:08 compute-0 nova_compute[185389]: 2026-01-26 17:22:08.779 185393 DEBUG nova.virt.libvirt.driver [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] [instance: cecfd5ba-76f1-47f6-8845-36e6c7ed9773] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 17:22:08 compute-0 nova_compute[185389]: 2026-01-26 17:22:08.779 185393 DEBUG nova.virt.libvirt.driver [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] [instance: cecfd5ba-76f1-47f6-8845-36e6c7ed9773] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 17:22:08 compute-0 nova_compute[185389]: 2026-01-26 17:22:08.784 185393 DEBUG nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: cecfd5ba-76f1-47f6-8845-36e6c7ed9773] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 17:22:08 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:22:08.801 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[68c18652-3e43-4c44-9b2c-da6a29ba3868]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:22:08 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:22:08.832 238787 DEBUG oslo.privsep.daemon [-] privsep: reply[88b751dd-83f4-494c-a6a0-149f82afdf3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:22:08 compute-0 nova_compute[185389]: 2026-01-26 17:22:08.838 185393 INFO nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: cecfd5ba-76f1-47f6-8845-36e6c7ed9773] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 17:22:08 compute-0 NetworkManager[56253]: <info>  [1769448128.8487] manager: (tapf2973d9a-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/49)
Jan 26 17:22:08 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:22:08.846 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[a57b4fe6-8e1e-4bed-b121-0d5353f3a93e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:22:08 compute-0 nova_compute[185389]: 2026-01-26 17:22:08.881 185393 INFO nova.compute.manager [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] [instance: cecfd5ba-76f1-47f6-8845-36e6c7ed9773] Took 33.31 seconds to spawn the instance on the hypervisor.
Jan 26 17:22:08 compute-0 nova_compute[185389]: 2026-01-26 17:22:08.881 185393 DEBUG nova.compute.manager [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] [instance: cecfd5ba-76f1-47f6-8845-36e6c7ed9773] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 17:22:08 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:22:08.885 238787 DEBUG oslo.privsep.daemon [-] privsep: reply[50d2dd55-445e-4c01-a77b-835dfdb66155]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:22:08 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:22:08.891 238787 DEBUG oslo.privsep.daemon [-] privsep: reply[9de8bf18-63b7-4013-9bbe-1662deecbe60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:22:08 compute-0 NetworkManager[56253]: <info>  [1769448128.9153] device (tapf2973d9a-c0): carrier: link connected
Jan 26 17:22:08 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:22:08.920 238787 DEBUG oslo.privsep.daemon [-] privsep: reply[619d83a8-13f2-427e-80ce-e3c0ab425410]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:22:08 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:22:08.940 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[93f47e07-2e8b-4456-a533-fa0d432bce26]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf2973d9a-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:56:67:b4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 677741, 'reachable_time': 28564, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 256647, 'error': None, 'target': 'ovnmeta-f2973d9a-cd90-4302-94cd-5d199c633af0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:22:08 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:22:08.961 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[2c8c97e1-d8aa-4858-b870-d9815ee64693]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe56:67b4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 677741, 'tstamp': 677741}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 256648, 'error': None, 'target': 'ovnmeta-f2973d9a-cd90-4302-94cd-5d199c633af0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:22:08 compute-0 nova_compute[185389]: 2026-01-26 17:22:08.970 185393 INFO nova.compute.manager [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] [instance: cecfd5ba-76f1-47f6-8845-36e6c7ed9773] Took 33.82 seconds to build instance.
Jan 26 17:22:08 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:22:08.977 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[c97b525c-e3f0-41cb-afb9-86ef439f14b3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf2973d9a-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:56:67:b4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 677741, 'reachable_time': 28564, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 256650, 'error': None, 'target': 'ovnmeta-f2973d9a-cd90-4302-94cd-5d199c633af0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:22:09 compute-0 nova_compute[185389]: 2026-01-26 17:22:09.006 185393 DEBUG nova.virt.driver [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Emitting event <LifecycleEvent: 1769448129.004697, 3a17d6a2-7bda-406b-a180-049f0e7adc78 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 17:22:09 compute-0 nova_compute[185389]: 2026-01-26 17:22:09.006 185393 INFO nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] VM Started (Lifecycle Event)
Jan 26 17:22:09 compute-0 nova_compute[185389]: 2026-01-26 17:22:09.014 185393 DEBUG oslo_concurrency.lockutils [None req-36c99228-a33d-423b-b495-8ec0e379c6f4 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] Lock "cecfd5ba-76f1-47f6-8845-36e6c7ed9773" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 33.950s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:22:09 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:22:09.028 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[9d6251c1-4055-4ed5-bd60-16b5d21186cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:22:09 compute-0 nova_compute[185389]: 2026-01-26 17:22:09.055 185393 DEBUG nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 17:22:09 compute-0 nova_compute[185389]: 2026-01-26 17:22:09.062 185393 DEBUG nova.virt.driver [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Emitting event <LifecycleEvent: 1769448129.0049028, 3a17d6a2-7bda-406b-a180-049f0e7adc78 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 17:22:09 compute-0 nova_compute[185389]: 2026-01-26 17:22:09.063 185393 INFO nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] VM Paused (Lifecycle Event)
Jan 26 17:22:09 compute-0 nova_compute[185389]: 2026-01-26 17:22:09.100 185393 DEBUG nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 17:22:09 compute-0 nova_compute[185389]: 2026-01-26 17:22:09.104 185393 DEBUG nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 17:22:09 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:22:09.105 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[2bd7c108-7eaf-49d8-961a-e34312035395]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:22:09 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:22:09.107 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf2973d9a-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:22:09 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:22:09.107 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 17:22:09 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:22:09.108 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf2973d9a-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:22:09 compute-0 NetworkManager[56253]: <info>  [1769448129.1115] manager: (tapf2973d9a-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/50)
Jan 26 17:22:09 compute-0 kernel: tapf2973d9a-c0: entered promiscuous mode
Jan 26 17:22:09 compute-0 nova_compute[185389]: 2026-01-26 17:22:09.111 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:22:09 compute-0 nova_compute[185389]: 2026-01-26 17:22:09.114 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:22:09 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:22:09.124 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf2973d9a-c0, col_values=(('external_ids', {'iface-id': '1a341684-bed3-4740-9502-499c9512f610'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:22:09 compute-0 nova_compute[185389]: 2026-01-26 17:22:09.126 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:22:09 compute-0 ovn_controller[97699]: 2026-01-26T17:22:09Z|00088|binding|INFO|Releasing lport 1a341684-bed3-4740-9502-499c9512f610 from this chassis (sb_readonly=0)
Jan 26 17:22:09 compute-0 nova_compute[185389]: 2026-01-26 17:22:09.131 185393 INFO nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 17:22:09 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:22:09.147 106955 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f2973d9a-cd90-4302-94cd-5d199c633af0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f2973d9a-cd90-4302-94cd-5d199c633af0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 17:22:09 compute-0 nova_compute[185389]: 2026-01-26 17:22:09.151 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:22:09 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:22:09.148 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[0131094c-223c-47d4-be43-e61745280ba2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:22:09 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:22:09.149 106955 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 17:22:09 compute-0 ovn_metadata_agent[106950]: global
Jan 26 17:22:09 compute-0 ovn_metadata_agent[106950]:     log         /dev/log local0 debug
Jan 26 17:22:09 compute-0 ovn_metadata_agent[106950]:     log-tag     haproxy-metadata-proxy-f2973d9a-cd90-4302-94cd-5d199c633af0
Jan 26 17:22:09 compute-0 ovn_metadata_agent[106950]:     user        root
Jan 26 17:22:09 compute-0 ovn_metadata_agent[106950]:     group       root
Jan 26 17:22:09 compute-0 ovn_metadata_agent[106950]:     maxconn     1024
Jan 26 17:22:09 compute-0 ovn_metadata_agent[106950]:     pidfile     /var/lib/neutron/external/pids/f2973d9a-cd90-4302-94cd-5d199c633af0.pid.haproxy
Jan 26 17:22:09 compute-0 ovn_metadata_agent[106950]:     daemon
Jan 26 17:22:09 compute-0 ovn_metadata_agent[106950]: 
Jan 26 17:22:09 compute-0 ovn_metadata_agent[106950]: defaults
Jan 26 17:22:09 compute-0 ovn_metadata_agent[106950]:     log global
Jan 26 17:22:09 compute-0 ovn_metadata_agent[106950]:     mode http
Jan 26 17:22:09 compute-0 ovn_metadata_agent[106950]:     option httplog
Jan 26 17:22:09 compute-0 ovn_metadata_agent[106950]:     option dontlognull
Jan 26 17:22:09 compute-0 ovn_metadata_agent[106950]:     option http-server-close
Jan 26 17:22:09 compute-0 ovn_metadata_agent[106950]:     option forwardfor
Jan 26 17:22:09 compute-0 ovn_metadata_agent[106950]:     retries                 3
Jan 26 17:22:09 compute-0 ovn_metadata_agent[106950]:     timeout http-request    30s
Jan 26 17:22:09 compute-0 ovn_metadata_agent[106950]:     timeout connect         30s
Jan 26 17:22:09 compute-0 ovn_metadata_agent[106950]:     timeout client          32s
Jan 26 17:22:09 compute-0 ovn_metadata_agent[106950]:     timeout server          32s
Jan 26 17:22:09 compute-0 ovn_metadata_agent[106950]:     timeout http-keep-alive 30s
Jan 26 17:22:09 compute-0 ovn_metadata_agent[106950]: 
Jan 26 17:22:09 compute-0 ovn_metadata_agent[106950]: 
Jan 26 17:22:09 compute-0 ovn_metadata_agent[106950]: listen listener
Jan 26 17:22:09 compute-0 ovn_metadata_agent[106950]:     bind 169.254.169.254:80
Jan 26 17:22:09 compute-0 ovn_metadata_agent[106950]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 17:22:09 compute-0 ovn_metadata_agent[106950]:     http-request add-header X-OVN-Network-ID f2973d9a-cd90-4302-94cd-5d199c633af0
Jan 26 17:22:09 compute-0 ovn_metadata_agent[106950]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 17:22:09 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:22:09.150 106955 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f2973d9a-cd90-4302-94cd-5d199c633af0', 'env', 'PROCESS_TAG=haproxy-f2973d9a-cd90-4302-94cd-5d199c633af0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f2973d9a-cd90-4302-94cd-5d199c633af0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 17:22:09 compute-0 podman[256680]: 2026-01-26 17:22:09.614483972 +0000 UTC m=+0.076405135 container create 7e3dfac1ba700992e3453295463834da6afac3328b6cd2fe92bb6c762c35982a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f2973d9a-cd90-4302-94cd-5d199c633af0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 26 17:22:09 compute-0 systemd[1]: Started libpod-conmon-7e3dfac1ba700992e3453295463834da6afac3328b6cd2fe92bb6c762c35982a.scope.
Jan 26 17:22:09 compute-0 podman[256680]: 2026-01-26 17:22:09.574833576 +0000 UTC m=+0.036754689 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 17:22:09 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:22:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4672e57531aadbbcebb7ff295f2d92d3d0d91ab2c3242d3e80414c8571b0b6ee/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 17:22:09 compute-0 nova_compute[185389]: 2026-01-26 17:22:09.698 185393 DEBUG nova.network.neutron [req-ad56af89-9333-41d8-b62e-d6d5625af70e req-2567fe23-97ad-4900-93ba-1e9eecdb1a0c 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] Updated VIF entry in instance network info cache for port 244cc784-cc22-4baa-ae9b-a9648a2a11b8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 17:22:09 compute-0 nova_compute[185389]: 2026-01-26 17:22:09.699 185393 DEBUG nova.network.neutron [req-ad56af89-9333-41d8-b62e-d6d5625af70e req-2567fe23-97ad-4900-93ba-1e9eecdb1a0c 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] Updating instance_info_cache with network_info: [{"id": "244cc784-cc22-4baa-ae9b-a9648a2a11b8", "address": "fa:16:3e:95:38:b5", "network": {"id": "f2973d9a-cd90-4302-94cd-5d199c633af0", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1831496438-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63b4132d471f40c4bc46982b5adba0ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap244cc784-cc", "ovs_interfaceid": "244cc784-cc22-4baa-ae9b-a9648a2a11b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 17:22:09 compute-0 podman[256680]: 2026-01-26 17:22:09.702762418 +0000 UTC m=+0.164683521 container init 7e3dfac1ba700992e3453295463834da6afac3328b6cd2fe92bb6c762c35982a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f2973d9a-cd90-4302-94cd-5d199c633af0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 26 17:22:09 compute-0 podman[256680]: 2026-01-26 17:22:09.713445208 +0000 UTC m=+0.175366291 container start 7e3dfac1ba700992e3453295463834da6afac3328b6cd2fe92bb6c762c35982a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f2973d9a-cd90-4302-94cd-5d199c633af0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 26 17:22:09 compute-0 nova_compute[185389]: 2026-01-26 17:22:09.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:22:09 compute-0 nova_compute[185389]: 2026-01-26 17:22:09.734 185393 DEBUG oslo_concurrency.lockutils [req-ad56af89-9333-41d8-b62e-d6d5625af70e req-2567fe23-97ad-4900-93ba-1e9eecdb1a0c 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Releasing lock "refresh_cache-3a17d6a2-7bda-406b-a180-049f0e7adc78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 17:22:09 compute-0 neutron-haproxy-ovnmeta-f2973d9a-cd90-4302-94cd-5d199c633af0[256694]: [NOTICE]   (256712) : New worker (256716) forked
Jan 26 17:22:09 compute-0 neutron-haproxy-ovnmeta-f2973d9a-cd90-4302-94cd-5d199c633af0[256694]: [NOTICE]   (256712) : Loading success.
Jan 26 17:22:09 compute-0 podman[256691]: 2026-01-26 17:22:09.74629343 +0000 UTC m=+0.093383086 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 26 17:22:10 compute-0 nova_compute[185389]: 2026-01-26 17:22:10.519 185393 DEBUG oslo_concurrency.lockutils [None req-ad5bd65d-7c9c-4f71-88e8-b4c712f2dcd9 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] Acquiring lock "cecfd5ba-76f1-47f6-8845-36e6c7ed9773" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:22:10 compute-0 nova_compute[185389]: 2026-01-26 17:22:10.520 185393 DEBUG oslo_concurrency.lockutils [None req-ad5bd65d-7c9c-4f71-88e8-b4c712f2dcd9 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] Lock "cecfd5ba-76f1-47f6-8845-36e6c7ed9773" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:22:10 compute-0 nova_compute[185389]: 2026-01-26 17:22:10.520 185393 DEBUG oslo_concurrency.lockutils [None req-ad5bd65d-7c9c-4f71-88e8-b4c712f2dcd9 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] Acquiring lock "cecfd5ba-76f1-47f6-8845-36e6c7ed9773-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:22:10 compute-0 nova_compute[185389]: 2026-01-26 17:22:10.521 185393 DEBUG oslo_concurrency.lockutils [None req-ad5bd65d-7c9c-4f71-88e8-b4c712f2dcd9 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] Lock "cecfd5ba-76f1-47f6-8845-36e6c7ed9773-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:22:10 compute-0 nova_compute[185389]: 2026-01-26 17:22:10.522 185393 DEBUG oslo_concurrency.lockutils [None req-ad5bd65d-7c9c-4f71-88e8-b4c712f2dcd9 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] Lock "cecfd5ba-76f1-47f6-8845-36e6c7ed9773-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:22:10 compute-0 nova_compute[185389]: 2026-01-26 17:22:10.523 185393 INFO nova.compute.manager [None req-ad5bd65d-7c9c-4f71-88e8-b4c712f2dcd9 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] [instance: cecfd5ba-76f1-47f6-8845-36e6c7ed9773] Terminating instance
Jan 26 17:22:10 compute-0 nova_compute[185389]: 2026-01-26 17:22:10.524 185393 DEBUG nova.compute.manager [None req-ad5bd65d-7c9c-4f71-88e8-b4c712f2dcd9 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] [instance: cecfd5ba-76f1-47f6-8845-36e6c7ed9773] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 17:22:10 compute-0 kernel: tap9121ca16-ef (unregistering): left promiscuous mode
Jan 26 17:22:10 compute-0 ovn_controller[97699]: 2026-01-26T17:22:10Z|00089|binding|INFO|Releasing lport 9121ca16-ef95-465a-8d54-65a4d9b6659a from this chassis (sb_readonly=0)
Jan 26 17:22:10 compute-0 ovn_controller[97699]: 2026-01-26T17:22:10Z|00090|binding|INFO|Setting lport 9121ca16-ef95-465a-8d54-65a4d9b6659a down in Southbound
Jan 26 17:22:10 compute-0 nova_compute[185389]: 2026-01-26 17:22:10.569 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:22:10 compute-0 NetworkManager[56253]: <info>  [1769448130.5702] device (tap9121ca16-ef): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 17:22:10 compute-0 ovn_controller[97699]: 2026-01-26T17:22:10Z|00091|binding|INFO|Removing iface tap9121ca16-ef ovn-installed in OVS
Jan 26 17:22:10 compute-0 nova_compute[185389]: 2026-01-26 17:22:10.574 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:22:10 compute-0 nova_compute[185389]: 2026-01-26 17:22:10.589 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:22:10 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Deactivated successfully.
Jan 26 17:22:10 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Consumed 3.543s CPU time.
Jan 26 17:22:10 compute-0 systemd-machined[156679]: Machine qemu-8-instance-00000008 terminated.
Jan 26 17:22:10 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:22:10.637 106955 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:98:4e:d9 10.100.0.10'], port_security=['fa:16:3e:98:4e:d9 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'cecfd5ba-76f1-47f6-8845-36e6c7ed9773', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-11ede1e9-a5f0-4f1a-82c2-9705645b0db8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '18301e8b436a4fa7ba388e173f305ba9', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2e989ead-79f9-412e-82f2-4db0d9019b04', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a7180414-3027-43db-8f29-4631defad8ff, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7faee18cfdf0>], logical_port=9121ca16-ef95-465a-8d54-65a4d9b6659a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7faee18cfdf0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 17:22:10 compute-0 nova_compute[185389]: 2026-01-26 17:22:10.639 185393 DEBUG nova.compute.manager [req-2d62076f-7af4-4a95-ada2-5e07de8e7523 req-87c9d453-eeec-4c81-9d59-c7861d8caae4 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: cecfd5ba-76f1-47f6-8845-36e6c7ed9773] Received event network-vif-plugged-9121ca16-ef95-465a-8d54-65a4d9b6659a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 17:22:10 compute-0 nova_compute[185389]: 2026-01-26 17:22:10.640 185393 DEBUG oslo_concurrency.lockutils [req-2d62076f-7af4-4a95-ada2-5e07de8e7523 req-87c9d453-eeec-4c81-9d59-c7861d8caae4 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquiring lock "cecfd5ba-76f1-47f6-8845-36e6c7ed9773-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:22:10 compute-0 nova_compute[185389]: 2026-01-26 17:22:10.640 185393 DEBUG oslo_concurrency.lockutils [req-2d62076f-7af4-4a95-ada2-5e07de8e7523 req-87c9d453-eeec-4c81-9d59-c7861d8caae4 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "cecfd5ba-76f1-47f6-8845-36e6c7ed9773-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:22:10 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:22:10.640 106955 INFO neutron.agent.ovn.metadata.agent [-] Port 9121ca16-ef95-465a-8d54-65a4d9b6659a in datapath 11ede1e9-a5f0-4f1a-82c2-9705645b0db8 unbound from our chassis
Jan 26 17:22:10 compute-0 nova_compute[185389]: 2026-01-26 17:22:10.640 185393 DEBUG oslo_concurrency.lockutils [req-2d62076f-7af4-4a95-ada2-5e07de8e7523 req-87c9d453-eeec-4c81-9d59-c7861d8caae4 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "cecfd5ba-76f1-47f6-8845-36e6c7ed9773-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:22:10 compute-0 nova_compute[185389]: 2026-01-26 17:22:10.640 185393 DEBUG nova.compute.manager [req-2d62076f-7af4-4a95-ada2-5e07de8e7523 req-87c9d453-eeec-4c81-9d59-c7861d8caae4 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: cecfd5ba-76f1-47f6-8845-36e6c7ed9773] No waiting events found dispatching network-vif-plugged-9121ca16-ef95-465a-8d54-65a4d9b6659a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 17:22:10 compute-0 nova_compute[185389]: 2026-01-26 17:22:10.640 185393 WARNING nova.compute.manager [req-2d62076f-7af4-4a95-ada2-5e07de8e7523 req-87c9d453-eeec-4c81-9d59-c7861d8caae4 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: cecfd5ba-76f1-47f6-8845-36e6c7ed9773] Received unexpected event network-vif-plugged-9121ca16-ef95-465a-8d54-65a4d9b6659a for instance with vm_state active and task_state deleting.
Jan 26 17:22:10 compute-0 nova_compute[185389]: 2026-01-26 17:22:10.641 185393 DEBUG nova.compute.manager [req-2d62076f-7af4-4a95-ada2-5e07de8e7523 req-87c9d453-eeec-4c81-9d59-c7861d8caae4 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 8c28c24a-cab4-43b3-b9ee-4ce40d092c71] Received event network-vif-plugged-86c33312-6904-4dd4-9a95-7fd318980439 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 17:22:10 compute-0 nova_compute[185389]: 2026-01-26 17:22:10.641 185393 DEBUG oslo_concurrency.lockutils [req-2d62076f-7af4-4a95-ada2-5e07de8e7523 req-87c9d453-eeec-4c81-9d59-c7861d8caae4 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquiring lock "8c28c24a-cab4-43b3-b9ee-4ce40d092c71-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:22:10 compute-0 nova_compute[185389]: 2026-01-26 17:22:10.642 185393 DEBUG oslo_concurrency.lockutils [req-2d62076f-7af4-4a95-ada2-5e07de8e7523 req-87c9d453-eeec-4c81-9d59-c7861d8caae4 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "8c28c24a-cab4-43b3-b9ee-4ce40d092c71-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:22:10 compute-0 nova_compute[185389]: 2026-01-26 17:22:10.643 185393 DEBUG oslo_concurrency.lockutils [req-2d62076f-7af4-4a95-ada2-5e07de8e7523 req-87c9d453-eeec-4c81-9d59-c7861d8caae4 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "8c28c24a-cab4-43b3-b9ee-4ce40d092c71-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:22:10 compute-0 nova_compute[185389]: 2026-01-26 17:22:10.643 185393 DEBUG nova.compute.manager [req-2d62076f-7af4-4a95-ada2-5e07de8e7523 req-87c9d453-eeec-4c81-9d59-c7861d8caae4 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 8c28c24a-cab4-43b3-b9ee-4ce40d092c71] Processing event network-vif-plugged-86c33312-6904-4dd4-9a95-7fd318980439 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 17:22:10 compute-0 nova_compute[185389]: 2026-01-26 17:22:10.643 185393 DEBUG nova.compute.manager [req-2d62076f-7af4-4a95-ada2-5e07de8e7523 req-87c9d453-eeec-4c81-9d59-c7861d8caae4 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 8c28c24a-cab4-43b3-b9ee-4ce40d092c71] Received event network-vif-plugged-86c33312-6904-4dd4-9a95-7fd318980439 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 17:22:10 compute-0 nova_compute[185389]: 2026-01-26 17:22:10.643 185393 DEBUG oslo_concurrency.lockutils [req-2d62076f-7af4-4a95-ada2-5e07de8e7523 req-87c9d453-eeec-4c81-9d59-c7861d8caae4 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquiring lock "8c28c24a-cab4-43b3-b9ee-4ce40d092c71-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:22:10 compute-0 nova_compute[185389]: 2026-01-26 17:22:10.643 185393 DEBUG oslo_concurrency.lockutils [req-2d62076f-7af4-4a95-ada2-5e07de8e7523 req-87c9d453-eeec-4c81-9d59-c7861d8caae4 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "8c28c24a-cab4-43b3-b9ee-4ce40d092c71-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:22:10 compute-0 nova_compute[185389]: 2026-01-26 17:22:10.643 185393 DEBUG oslo_concurrency.lockutils [req-2d62076f-7af4-4a95-ada2-5e07de8e7523 req-87c9d453-eeec-4c81-9d59-c7861d8caae4 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "8c28c24a-cab4-43b3-b9ee-4ce40d092c71-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:22:10 compute-0 nova_compute[185389]: 2026-01-26 17:22:10.644 185393 DEBUG nova.compute.manager [req-2d62076f-7af4-4a95-ada2-5e07de8e7523 req-87c9d453-eeec-4c81-9d59-c7861d8caae4 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 8c28c24a-cab4-43b3-b9ee-4ce40d092c71] No waiting events found dispatching network-vif-plugged-86c33312-6904-4dd4-9a95-7fd318980439 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 17:22:10 compute-0 nova_compute[185389]: 2026-01-26 17:22:10.644 185393 WARNING nova.compute.manager [req-2d62076f-7af4-4a95-ada2-5e07de8e7523 req-87c9d453-eeec-4c81-9d59-c7861d8caae4 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 8c28c24a-cab4-43b3-b9ee-4ce40d092c71] Received unexpected event network-vif-plugged-86c33312-6904-4dd4-9a95-7fd318980439 for instance with vm_state building and task_state spawning.
Jan 26 17:22:10 compute-0 nova_compute[185389]: 2026-01-26 17:22:10.644 185393 DEBUG nova.compute.manager [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] [instance: 8c28c24a-cab4-43b3-b9ee-4ce40d092c71] Instance event wait completed in 18 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 17:22:10 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:22:10.645 106955 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 11ede1e9-a5f0-4f1a-82c2-9705645b0db8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 17:22:10 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:22:10.650 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[077d625d-5c20-434e-bc1a-3d0a3e53f5fb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:22:10 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:22:10.651 106955 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-11ede1e9-a5f0-4f1a-82c2-9705645b0db8 namespace which is not needed anymore
Jan 26 17:22:10 compute-0 nova_compute[185389]: 2026-01-26 17:22:10.669 185393 DEBUG nova.virt.driver [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Emitting event <LifecycleEvent: 1769448130.649896, 8c28c24a-cab4-43b3-b9ee-4ce40d092c71 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 17:22:10 compute-0 nova_compute[185389]: 2026-01-26 17:22:10.669 185393 INFO nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: 8c28c24a-cab4-43b3-b9ee-4ce40d092c71] VM Resumed (Lifecycle Event)
Jan 26 17:22:10 compute-0 nova_compute[185389]: 2026-01-26 17:22:10.672 185393 DEBUG nova.virt.libvirt.driver [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] [instance: 8c28c24a-cab4-43b3-b9ee-4ce40d092c71] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 17:22:10 compute-0 nova_compute[185389]: 2026-01-26 17:22:10.682 185393 INFO nova.virt.libvirt.driver [-] [instance: 8c28c24a-cab4-43b3-b9ee-4ce40d092c71] Instance spawned successfully.
Jan 26 17:22:10 compute-0 nova_compute[185389]: 2026-01-26 17:22:10.682 185393 DEBUG nova.virt.libvirt.driver [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] [instance: 8c28c24a-cab4-43b3-b9ee-4ce40d092c71] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 17:22:10 compute-0 nova_compute[185389]: 2026-01-26 17:22:10.759 185393 DEBUG nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: 8c28c24a-cab4-43b3-b9ee-4ce40d092c71] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 17:22:10 compute-0 nova_compute[185389]: 2026-01-26 17:22:10.780 185393 DEBUG nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: 8c28c24a-cab4-43b3-b9ee-4ce40d092c71] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 17:22:10 compute-0 nova_compute[185389]: 2026-01-26 17:22:10.835 185393 DEBUG nova.virt.libvirt.driver [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] [instance: 8c28c24a-cab4-43b3-b9ee-4ce40d092c71] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 17:22:10 compute-0 nova_compute[185389]: 2026-01-26 17:22:10.835 185393 DEBUG nova.virt.libvirt.driver [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] [instance: 8c28c24a-cab4-43b3-b9ee-4ce40d092c71] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 17:22:10 compute-0 nova_compute[185389]: 2026-01-26 17:22:10.836 185393 DEBUG nova.virt.libvirt.driver [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] [instance: 8c28c24a-cab4-43b3-b9ee-4ce40d092c71] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 17:22:10 compute-0 nova_compute[185389]: 2026-01-26 17:22:10.837 185393 DEBUG nova.virt.libvirt.driver [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] [instance: 8c28c24a-cab4-43b3-b9ee-4ce40d092c71] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 17:22:10 compute-0 nova_compute[185389]: 2026-01-26 17:22:10.837 185393 DEBUG nova.virt.libvirt.driver [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] [instance: 8c28c24a-cab4-43b3-b9ee-4ce40d092c71] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 17:22:10 compute-0 nova_compute[185389]: 2026-01-26 17:22:10.838 185393 DEBUG nova.virt.libvirt.driver [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] [instance: 8c28c24a-cab4-43b3-b9ee-4ce40d092c71] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 17:22:10 compute-0 nova_compute[185389]: 2026-01-26 17:22:10.841 185393 INFO nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: 8c28c24a-cab4-43b3-b9ee-4ce40d092c71] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 17:22:10 compute-0 nova_compute[185389]: 2026-01-26 17:22:10.846 185393 INFO nova.virt.libvirt.driver [-] [instance: cecfd5ba-76f1-47f6-8845-36e6c7ed9773] Instance destroyed successfully.
Jan 26 17:22:10 compute-0 nova_compute[185389]: 2026-01-26 17:22:10.847 185393 DEBUG nova.objects.instance [None req-ad5bd65d-7c9c-4f71-88e8-b4c712f2dcd9 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] Lazy-loading 'resources' on Instance uuid cecfd5ba-76f1-47f6-8845-36e6c7ed9773 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 17:22:10 compute-0 nova_compute[185389]: 2026-01-26 17:22:10.849 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:22:10 compute-0 nova_compute[185389]: 2026-01-26 17:22:10.882 185393 DEBUG nova.virt.libvirt.vif [None req-ad5bd65d-7c9c-4f71-88e8-b4c712f2dcd9 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T17:21:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-88613890',display_name='tempest-ServerAddressesTestJSON-server-88613890',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-88613890',id=8,image_ref='90acf026-cf3a-409a-999e-35d89bb9a6bf',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T17:22:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='18301e8b436a4fa7ba388e173f305ba9',ramdisk_id='',reservation_id='r-6ngnbhld',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='90acf026-cf3a-409a-999e-35d89bb9a6bf',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerAddressesTestJSON-569621097',owner_user_name='tempest-ServerAddressesTestJSON-569621097-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T17:22:08Z,user_data=None,user_id='be42df6828874d2e90f3dabbd62031cc',uuid=cecfd5ba-76f1-47f6-8845-36e6c7ed9773,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9121ca16-ef95-465a-8d54-65a4d9b6659a", "address": "fa:16:3e:98:4e:d9", "network": {"id": "11ede1e9-a5f0-4f1a-82c2-9705645b0db8", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1102589919-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18301e8b436a4fa7ba388e173f305ba9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9121ca16-ef", "ovs_interfaceid": "9121ca16-ef95-465a-8d54-65a4d9b6659a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 17:22:10 compute-0 nova_compute[185389]: 2026-01-26 17:22:10.882 185393 DEBUG nova.network.os_vif_util [None req-ad5bd65d-7c9c-4f71-88e8-b4c712f2dcd9 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] Converting VIF {"id": "9121ca16-ef95-465a-8d54-65a4d9b6659a", "address": "fa:16:3e:98:4e:d9", "network": {"id": "11ede1e9-a5f0-4f1a-82c2-9705645b0db8", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1102589919-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18301e8b436a4fa7ba388e173f305ba9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9121ca16-ef", "ovs_interfaceid": "9121ca16-ef95-465a-8d54-65a4d9b6659a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 17:22:10 compute-0 nova_compute[185389]: 2026-01-26 17:22:10.883 185393 DEBUG nova.network.os_vif_util [None req-ad5bd65d-7c9c-4f71-88e8-b4c712f2dcd9 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:98:4e:d9,bridge_name='br-int',has_traffic_filtering=True,id=9121ca16-ef95-465a-8d54-65a4d9b6659a,network=Network(11ede1e9-a5f0-4f1a-82c2-9705645b0db8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9121ca16-ef') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 17:22:10 compute-0 nova_compute[185389]: 2026-01-26 17:22:10.884 185393 DEBUG os_vif [None req-ad5bd65d-7c9c-4f71-88e8-b4c712f2dcd9 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:98:4e:d9,bridge_name='br-int',has_traffic_filtering=True,id=9121ca16-ef95-465a-8d54-65a4d9b6659a,network=Network(11ede1e9-a5f0-4f1a-82c2-9705645b0db8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9121ca16-ef') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 17:22:10 compute-0 nova_compute[185389]: 2026-01-26 17:22:10.885 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:22:10 compute-0 nova_compute[185389]: 2026-01-26 17:22:10.886 185393 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9121ca16-ef, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:22:10 compute-0 nova_compute[185389]: 2026-01-26 17:22:10.893 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 17:22:10 compute-0 nova_compute[185389]: 2026-01-26 17:22:10.896 185393 INFO os_vif [None req-ad5bd65d-7c9c-4f71-88e8-b4c712f2dcd9 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:98:4e:d9,bridge_name='br-int',has_traffic_filtering=True,id=9121ca16-ef95-465a-8d54-65a4d9b6659a,network=Network(11ede1e9-a5f0-4f1a-82c2-9705645b0db8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9121ca16-ef')
Jan 26 17:22:10 compute-0 nova_compute[185389]: 2026-01-26 17:22:10.897 185393 INFO nova.virt.libvirt.driver [None req-ad5bd65d-7c9c-4f71-88e8-b4c712f2dcd9 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] [instance: cecfd5ba-76f1-47f6-8845-36e6c7ed9773] Deleting instance files /var/lib/nova/instances/cecfd5ba-76f1-47f6-8845-36e6c7ed9773_del
Jan 26 17:22:10 compute-0 nova_compute[185389]: 2026-01-26 17:22:10.898 185393 INFO nova.virt.libvirt.driver [None req-ad5bd65d-7c9c-4f71-88e8-b4c712f2dcd9 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] [instance: cecfd5ba-76f1-47f6-8845-36e6c7ed9773] Deletion of /var/lib/nova/instances/cecfd5ba-76f1-47f6-8845-36e6c7ed9773_del complete
Jan 26 17:22:10 compute-0 nova_compute[185389]: 2026-01-26 17:22:10.959 185393 INFO nova.compute.manager [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] [instance: 8c28c24a-cab4-43b3-b9ee-4ce40d092c71] Took 32.28 seconds to spawn the instance on the hypervisor.
Jan 26 17:22:10 compute-0 nova_compute[185389]: 2026-01-26 17:22:10.960 185393 DEBUG nova.compute.manager [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] [instance: 8c28c24a-cab4-43b3-b9ee-4ce40d092c71] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 17:22:11 compute-0 nova_compute[185389]: 2026-01-26 17:22:11.052 185393 INFO nova.compute.manager [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] [instance: 8c28c24a-cab4-43b3-b9ee-4ce40d092c71] Took 33.18 seconds to build instance.
Jan 26 17:22:11 compute-0 nova_compute[185389]: 2026-01-26 17:22:11.055 185393 INFO nova.compute.manager [None req-ad5bd65d-7c9c-4f71-88e8-b4c712f2dcd9 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] [instance: cecfd5ba-76f1-47f6-8845-36e6c7ed9773] Took 0.53 seconds to destroy the instance on the hypervisor.
Jan 26 17:22:11 compute-0 nova_compute[185389]: 2026-01-26 17:22:11.056 185393 DEBUG oslo.service.loopingcall [None req-ad5bd65d-7c9c-4f71-88e8-b4c712f2dcd9 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 17:22:11 compute-0 nova_compute[185389]: 2026-01-26 17:22:11.057 185393 DEBUG nova.compute.manager [-] [instance: cecfd5ba-76f1-47f6-8845-36e6c7ed9773] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 17:22:11 compute-0 nova_compute[185389]: 2026-01-26 17:22:11.057 185393 DEBUG nova.network.neutron [-] [instance: cecfd5ba-76f1-47f6-8845-36e6c7ed9773] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 17:22:11 compute-0 nova_compute[185389]: 2026-01-26 17:22:11.076 185393 DEBUG oslo_concurrency.lockutils [None req-4cb58eef-b3e8-4aa5-8b38-6899784019cd 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] Lock "8c28c24a-cab4-43b3-b9ee-4ce40d092c71" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 33.388s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:22:11 compute-0 neutron-haproxy-ovnmeta-11ede1e9-a5f0-4f1a-82c2-9705645b0db8[256397]: [NOTICE]   (256401) : haproxy version is 2.8.14-c23fe91
Jan 26 17:22:11 compute-0 neutron-haproxy-ovnmeta-11ede1e9-a5f0-4f1a-82c2-9705645b0db8[256397]: [NOTICE]   (256401) : path to executable is /usr/sbin/haproxy
Jan 26 17:22:11 compute-0 neutron-haproxy-ovnmeta-11ede1e9-a5f0-4f1a-82c2-9705645b0db8[256397]: [WARNING]  (256401) : Exiting Master process...
Jan 26 17:22:11 compute-0 neutron-haproxy-ovnmeta-11ede1e9-a5f0-4f1a-82c2-9705645b0db8[256397]: [WARNING]  (256401) : Exiting Master process...
Jan 26 17:22:11 compute-0 neutron-haproxy-ovnmeta-11ede1e9-a5f0-4f1a-82c2-9705645b0db8[256397]: [ALERT]    (256401) : Current worker (256403) exited with code 143 (Terminated)
Jan 26 17:22:11 compute-0 neutron-haproxy-ovnmeta-11ede1e9-a5f0-4f1a-82c2-9705645b0db8[256397]: [WARNING]  (256401) : All workers exited. Exiting... (0)
Jan 26 17:22:11 compute-0 systemd[1]: libpod-261ddc93e38e239a3727554ea5b0784938a4fa5fa9a4ed63ba47334ef70ff5e4.scope: Deactivated successfully.
Jan 26 17:22:11 compute-0 podman[256755]: 2026-01-26 17:22:11.219151392 +0000 UTC m=+0.430887788 container died 261ddc93e38e239a3727554ea5b0784938a4fa5fa9a4ed63ba47334ef70ff5e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-11ede1e9-a5f0-4f1a-82c2-9705645b0db8, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 26 17:22:11 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-261ddc93e38e239a3727554ea5b0784938a4fa5fa9a4ed63ba47334ef70ff5e4-userdata-shm.mount: Deactivated successfully.
Jan 26 17:22:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-375377a2b5b448accdea7fe4c5d3e2ad60d231cd84af3ff1d661b05e7f52f37b-merged.mount: Deactivated successfully.
Jan 26 17:22:11 compute-0 podman[256783]: 2026-01-26 17:22:11.504305262 +0000 UTC m=+0.262882416 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 17:22:11 compute-0 podman[256755]: 2026-01-26 17:22:11.540758932 +0000 UTC m=+0.752495308 container cleanup 261ddc93e38e239a3727554ea5b0784938a4fa5fa9a4ed63ba47334ef70ff5e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-11ede1e9-a5f0-4f1a-82c2-9705645b0db8, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 17:22:11 compute-0 systemd[1]: libpod-conmon-261ddc93e38e239a3727554ea5b0784938a4fa5fa9a4ed63ba47334ef70ff5e4.scope: Deactivated successfully.
Jan 26 17:22:11 compute-0 podman[256815]: 2026-01-26 17:22:11.75138062 +0000 UTC m=+0.179894305 container remove 261ddc93e38e239a3727554ea5b0784938a4fa5fa9a4ed63ba47334ef70ff5e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-11ede1e9-a5f0-4f1a-82c2-9705645b0db8, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 26 17:22:11 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:22:11.772 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[122371e6-905b-49ba-8bb1-94a995c0e197]: (4, ('Mon Jan 26 05:22:10 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-11ede1e9-a5f0-4f1a-82c2-9705645b0db8 (261ddc93e38e239a3727554ea5b0784938a4fa5fa9a4ed63ba47334ef70ff5e4)\n261ddc93e38e239a3727554ea5b0784938a4fa5fa9a4ed63ba47334ef70ff5e4\nMon Jan 26 05:22:11 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-11ede1e9-a5f0-4f1a-82c2-9705645b0db8 (261ddc93e38e239a3727554ea5b0784938a4fa5fa9a4ed63ba47334ef70ff5e4)\n261ddc93e38e239a3727554ea5b0784938a4fa5fa9a4ed63ba47334ef70ff5e4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:22:11 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:22:11.775 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[eb866110-1b6d-44db-a8fb-fe2287f3cdd7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:22:11 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:22:11.777 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap11ede1e9-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:22:11 compute-0 kernel: tap11ede1e9-a0: left promiscuous mode
Jan 26 17:22:11 compute-0 nova_compute[185389]: 2026-01-26 17:22:11.781 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:22:11 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:22:11.789 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[f59a25fa-0824-4b1e-baf2-0502ebf438c8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:22:11 compute-0 nova_compute[185389]: 2026-01-26 17:22:11.814 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:22:11 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:22:11.814 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[3fcd2644-4d05-4ea9-821d-e8914527fc9c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:22:11 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:22:11.819 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[27d788cb-7375-471a-81f6-55c5884003ff]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:22:11 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:22:11.844 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[1a3aa60e-3cfd-4ee4-8744-506aee786f74]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675990, 'reachable_time': 28420, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 256829, 'error': None, 'target': 'ovnmeta-11ede1e9-a5f0-4f1a-82c2-9705645b0db8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:22:11 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:22:11.847 107449 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-11ede1e9-a5f0-4f1a-82c2-9705645b0db8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 17:22:11 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:22:11.847 107449 DEBUG oslo.privsep.daemon [-] privsep: reply[a142da97-4c53-4c80-ba88-cd9805f87608]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:22:11 compute-0 systemd[1]: run-netns-ovnmeta\x2d11ede1e9\x2da5f0\x2d4f1a\x2d82c2\x2d9705645b0db8.mount: Deactivated successfully.
Jan 26 17:22:11 compute-0 nova_compute[185389]: 2026-01-26 17:22:11.868 185393 DEBUG nova.compute.manager [req-93e8f41f-e1d9-4b04-9286-4fc7e7d14fe4 req-6c291edf-292b-43dd-9b90-493a39c829ed 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] Received event network-vif-plugged-244cc784-cc22-4baa-ae9b-a9648a2a11b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 17:22:11 compute-0 nova_compute[185389]: 2026-01-26 17:22:11.869 185393 DEBUG oslo_concurrency.lockutils [req-93e8f41f-e1d9-4b04-9286-4fc7e7d14fe4 req-6c291edf-292b-43dd-9b90-493a39c829ed 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquiring lock "3a17d6a2-7bda-406b-a180-049f0e7adc78-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:22:11 compute-0 nova_compute[185389]: 2026-01-26 17:22:11.869 185393 DEBUG oslo_concurrency.lockutils [req-93e8f41f-e1d9-4b04-9286-4fc7e7d14fe4 req-6c291edf-292b-43dd-9b90-493a39c829ed 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "3a17d6a2-7bda-406b-a180-049f0e7adc78-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:22:11 compute-0 nova_compute[185389]: 2026-01-26 17:22:11.869 185393 DEBUG oslo_concurrency.lockutils [req-93e8f41f-e1d9-4b04-9286-4fc7e7d14fe4 req-6c291edf-292b-43dd-9b90-493a39c829ed 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "3a17d6a2-7bda-406b-a180-049f0e7adc78-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:22:11 compute-0 nova_compute[185389]: 2026-01-26 17:22:11.870 185393 DEBUG nova.compute.manager [req-93e8f41f-e1d9-4b04-9286-4fc7e7d14fe4 req-6c291edf-292b-43dd-9b90-493a39c829ed 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] Processing event network-vif-plugged-244cc784-cc22-4baa-ae9b-a9648a2a11b8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 17:22:11 compute-0 nova_compute[185389]: 2026-01-26 17:22:11.871 185393 DEBUG nova.compute.manager [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 17:22:11 compute-0 nova_compute[185389]: 2026-01-26 17:22:11.879 185393 DEBUG nova.virt.driver [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Emitting event <LifecycleEvent: 1769448131.8787935, 3a17d6a2-7bda-406b-a180-049f0e7adc78 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 17:22:11 compute-0 nova_compute[185389]: 2026-01-26 17:22:11.879 185393 INFO nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] VM Resumed (Lifecycle Event)
Jan 26 17:22:11 compute-0 nova_compute[185389]: 2026-01-26 17:22:11.883 185393 DEBUG nova.virt.libvirt.driver [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 17:22:11 compute-0 nova_compute[185389]: 2026-01-26 17:22:11.888 185393 INFO nova.virt.libvirt.driver [-] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] Instance spawned successfully.
Jan 26 17:22:11 compute-0 nova_compute[185389]: 2026-01-26 17:22:11.888 185393 DEBUG nova.virt.libvirt.driver [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 17:22:11 compute-0 nova_compute[185389]: 2026-01-26 17:22:11.912 185393 DEBUG nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 17:22:11 compute-0 nova_compute[185389]: 2026-01-26 17:22:11.919 185393 DEBUG nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 17:22:11 compute-0 nova_compute[185389]: 2026-01-26 17:22:11.922 185393 DEBUG nova.virt.libvirt.driver [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 17:22:11 compute-0 nova_compute[185389]: 2026-01-26 17:22:11.923 185393 DEBUG nova.virt.libvirt.driver [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 17:22:11 compute-0 nova_compute[185389]: 2026-01-26 17:22:11.923 185393 DEBUG nova.virt.libvirt.driver [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 17:22:11 compute-0 nova_compute[185389]: 2026-01-26 17:22:11.924 185393 DEBUG nova.virt.libvirt.driver [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 17:22:11 compute-0 nova_compute[185389]: 2026-01-26 17:22:11.924 185393 DEBUG nova.virt.libvirt.driver [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 17:22:11 compute-0 nova_compute[185389]: 2026-01-26 17:22:11.925 185393 DEBUG nova.virt.libvirt.driver [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 17:22:11 compute-0 nova_compute[185389]: 2026-01-26 17:22:11.962 185393 INFO nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 17:22:12 compute-0 nova_compute[185389]: 2026-01-26 17:22:12.000 185393 DEBUG nova.network.neutron [-] [instance: cecfd5ba-76f1-47f6-8845-36e6c7ed9773] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 17:22:12 compute-0 nova_compute[185389]: 2026-01-26 17:22:12.008 185393 INFO nova.compute.manager [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] Took 10.93 seconds to spawn the instance on the hypervisor.
Jan 26 17:22:12 compute-0 nova_compute[185389]: 2026-01-26 17:22:12.008 185393 DEBUG nova.compute.manager [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 17:22:12 compute-0 nova_compute[185389]: 2026-01-26 17:22:12.021 185393 INFO nova.compute.manager [-] [instance: cecfd5ba-76f1-47f6-8845-36e6c7ed9773] Took 0.96 seconds to deallocate network for instance.
Jan 26 17:22:12 compute-0 nova_compute[185389]: 2026-01-26 17:22:12.143 185393 DEBUG oslo_concurrency.lockutils [None req-ad5bd65d-7c9c-4f71-88e8-b4c712f2dcd9 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:22:12 compute-0 nova_compute[185389]: 2026-01-26 17:22:12.143 185393 DEBUG oslo_concurrency.lockutils [None req-ad5bd65d-7c9c-4f71-88e8-b4c712f2dcd9 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:22:12 compute-0 nova_compute[185389]: 2026-01-26 17:22:12.302 185393 INFO nova.compute.manager [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] Took 12.19 seconds to build instance.
Jan 26 17:22:12 compute-0 nova_compute[185389]: 2026-01-26 17:22:12.315 185393 DEBUG nova.compute.provider_tree [None req-ad5bd65d-7c9c-4f71-88e8-b4c712f2dcd9 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:22:12 compute-0 nova_compute[185389]: 2026-01-26 17:22:12.455 185393 DEBUG nova.scheduler.client.report [None req-ad5bd65d-7c9c-4f71-88e8-b4c712f2dcd9 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:22:12 compute-0 nova_compute[185389]: 2026-01-26 17:22:12.506 185393 DEBUG oslo_concurrency.lockutils [None req-cd59ab6f-60ff-4488-af9f-2e6879471225 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] Lock "3a17d6a2-7bda-406b-a180-049f0e7adc78" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.567s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:22:12 compute-0 nova_compute[185389]: 2026-01-26 17:22:12.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:22:12 compute-0 nova_compute[185389]: 2026-01-26 17:22:12.720 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 17:22:12 compute-0 nova_compute[185389]: 2026-01-26 17:22:12.720 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 17:22:12 compute-0 nova_compute[185389]: 2026-01-26 17:22:12.878 185393 DEBUG nova.compute.manager [req-4b8861f0-db4f-43a5-b26e-ba3fd06a5a6c req-0b545aa2-ad28-4d63-8da3-904080902346 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: cecfd5ba-76f1-47f6-8845-36e6c7ed9773] Received event network-vif-deleted-9121ca16-ef95-465a-8d54-65a4d9b6659a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 17:22:12 compute-0 nova_compute[185389]: 2026-01-26 17:22:12.883 185393 DEBUG oslo_concurrency.lockutils [None req-ad5bd65d-7c9c-4f71-88e8-b4c712f2dcd9 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.740s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:22:12 compute-0 nova_compute[185389]: 2026-01-26 17:22:12.921 185393 INFO nova.scheduler.client.report [None req-ad5bd65d-7c9c-4f71-88e8-b4c712f2dcd9 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] Deleted allocations for instance cecfd5ba-76f1-47f6-8845-36e6c7ed9773
Jan 26 17:22:13 compute-0 nova_compute[185389]: 2026-01-26 17:22:13.027 185393 DEBUG oslo_concurrency.lockutils [None req-ad5bd65d-7c9c-4f71-88e8-b4c712f2dcd9 be42df6828874d2e90f3dabbd62031cc 18301e8b436a4fa7ba388e173f305ba9 - - default default] Lock "cecfd5ba-76f1-47f6-8845-36e6c7ed9773" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.507s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:22:13 compute-0 nova_compute[185389]: 2026-01-26 17:22:13.203 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "refresh_cache-186e87cb-beb9-48df-8b10-dfc5c8afe996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 17:22:13 compute-0 nova_compute[185389]: 2026-01-26 17:22:13.204 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquired lock "refresh_cache-186e87cb-beb9-48df-8b10-dfc5c8afe996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 17:22:13 compute-0 nova_compute[185389]: 2026-01-26 17:22:13.204 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 26 17:22:13 compute-0 nova_compute[185389]: 2026-01-26 17:22:13.205 185393 DEBUG nova.objects.instance [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 186e87cb-beb9-48df-8b10-dfc5c8afe996 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 17:22:13 compute-0 nova_compute[185389]: 2026-01-26 17:22:13.209 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:22:14 compute-0 nova_compute[185389]: 2026-01-26 17:22:14.015 185393 DEBUG nova.compute.manager [req-0222b11b-d230-4988-bf92-731aa3b0d8d8 req-d30d9ef0-9faf-4404-b71d-f0c1bc4e576a 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] Received event network-vif-plugged-244cc784-cc22-4baa-ae9b-a9648a2a11b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 17:22:14 compute-0 nova_compute[185389]: 2026-01-26 17:22:14.016 185393 DEBUG oslo_concurrency.lockutils [req-0222b11b-d230-4988-bf92-731aa3b0d8d8 req-d30d9ef0-9faf-4404-b71d-f0c1bc4e576a 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquiring lock "3a17d6a2-7bda-406b-a180-049f0e7adc78-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:22:14 compute-0 nova_compute[185389]: 2026-01-26 17:22:14.018 185393 DEBUG oslo_concurrency.lockutils [req-0222b11b-d230-4988-bf92-731aa3b0d8d8 req-d30d9ef0-9faf-4404-b71d-f0c1bc4e576a 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "3a17d6a2-7bda-406b-a180-049f0e7adc78-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:22:14 compute-0 nova_compute[185389]: 2026-01-26 17:22:14.018 185393 DEBUG oslo_concurrency.lockutils [req-0222b11b-d230-4988-bf92-731aa3b0d8d8 req-d30d9ef0-9faf-4404-b71d-f0c1bc4e576a 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "3a17d6a2-7bda-406b-a180-049f0e7adc78-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:22:14 compute-0 nova_compute[185389]: 2026-01-26 17:22:14.019 185393 DEBUG nova.compute.manager [req-0222b11b-d230-4988-bf92-731aa3b0d8d8 req-d30d9ef0-9faf-4404-b71d-f0c1bc4e576a 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] No waiting events found dispatching network-vif-plugged-244cc784-cc22-4baa-ae9b-a9648a2a11b8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 17:22:14 compute-0 nova_compute[185389]: 2026-01-26 17:22:14.020 185393 WARNING nova.compute.manager [req-0222b11b-d230-4988-bf92-731aa3b0d8d8 req-d30d9ef0-9faf-4404-b71d-f0c1bc4e576a 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] Received unexpected event network-vif-plugged-244cc784-cc22-4baa-ae9b-a9648a2a11b8 for instance with vm_state active and task_state None.
Jan 26 17:22:14 compute-0 podman[256836]: 2026-01-26 17:22:14.802930316 +0000 UTC m=+0.109322548 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, config_id=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, architecture=x86_64, release=1214.1726694543, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, maintainer=Red Hat, Inc., name=ubi9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Jan 26 17:22:14 compute-0 podman[256835]: 2026-01-26 17:22:14.806369899 +0000 UTC m=+0.119532456 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 17:22:14 compute-0 podman[256834]: 2026-01-26 17:22:14.822016994 +0000 UTC m=+0.136769353 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, 
org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 26 17:22:15 compute-0 nova_compute[185389]: 2026-01-26 17:22:15.034 185393 DEBUG nova.compute.manager [req-574c394d-3286-490f-a579-cab44daee0c2 req-9806b936-ec10-4d94-9786-7e7a89396699 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 8c28c24a-cab4-43b3-b9ee-4ce40d092c71] Received event network-changed-86c33312-6904-4dd4-9a95-7fd318980439 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 17:22:15 compute-0 nova_compute[185389]: 2026-01-26 17:22:15.035 185393 DEBUG nova.compute.manager [req-574c394d-3286-490f-a579-cab44daee0c2 req-9806b936-ec10-4d94-9786-7e7a89396699 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 8c28c24a-cab4-43b3-b9ee-4ce40d092c71] Refreshing instance network info cache due to event network-changed-86c33312-6904-4dd4-9a95-7fd318980439. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 17:22:15 compute-0 nova_compute[185389]: 2026-01-26 17:22:15.036 185393 DEBUG oslo_concurrency.lockutils [req-574c394d-3286-490f-a579-cab44daee0c2 req-9806b936-ec10-4d94-9786-7e7a89396699 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquiring lock "refresh_cache-8c28c24a-cab4-43b3-b9ee-4ce40d092c71" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 17:22:15 compute-0 nova_compute[185389]: 2026-01-26 17:22:15.036 185393 DEBUG oslo_concurrency.lockutils [req-574c394d-3286-490f-a579-cab44daee0c2 req-9806b936-ec10-4d94-9786-7e7a89396699 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquired lock "refresh_cache-8c28c24a-cab4-43b3-b9ee-4ce40d092c71" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 17:22:15 compute-0 nova_compute[185389]: 2026-01-26 17:22:15.036 185393 DEBUG nova.network.neutron [req-574c394d-3286-490f-a579-cab44daee0c2 req-9806b936-ec10-4d94-9786-7e7a89396699 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 8c28c24a-cab4-43b3-b9ee-4ce40d092c71] Refreshing network info cache for port 86c33312-6904-4dd4-9a95-7fd318980439 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 17:22:15 compute-0 nova_compute[185389]: 2026-01-26 17:22:15.852 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:22:15 compute-0 nova_compute[185389]: 2026-01-26 17:22:15.888 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:22:16 compute-0 nova_compute[185389]: 2026-01-26 17:22:16.821 185393 DEBUG oslo_concurrency.lockutils [None req-48c6b89e-603e-4e3d-a455-1133d068f025 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] Acquiring lock "8c28c24a-cab4-43b3-b9ee-4ce40d092c71" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:22:16 compute-0 nova_compute[185389]: 2026-01-26 17:22:16.823 185393 DEBUG oslo_concurrency.lockutils [None req-48c6b89e-603e-4e3d-a455-1133d068f025 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] Lock "8c28c24a-cab4-43b3-b9ee-4ce40d092c71" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:22:16 compute-0 nova_compute[185389]: 2026-01-26 17:22:16.824 185393 DEBUG oslo_concurrency.lockutils [None req-48c6b89e-603e-4e3d-a455-1133d068f025 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] Acquiring lock "8c28c24a-cab4-43b3-b9ee-4ce40d092c71-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:22:16 compute-0 nova_compute[185389]: 2026-01-26 17:22:16.824 185393 DEBUG oslo_concurrency.lockutils [None req-48c6b89e-603e-4e3d-a455-1133d068f025 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] Lock "8c28c24a-cab4-43b3-b9ee-4ce40d092c71-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:22:16 compute-0 nova_compute[185389]: 2026-01-26 17:22:16.825 185393 DEBUG oslo_concurrency.lockutils [None req-48c6b89e-603e-4e3d-a455-1133d068f025 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] Lock "8c28c24a-cab4-43b3-b9ee-4ce40d092c71-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:22:16 compute-0 nova_compute[185389]: 2026-01-26 17:22:16.827 185393 INFO nova.compute.manager [None req-48c6b89e-603e-4e3d-a455-1133d068f025 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] [instance: 8c28c24a-cab4-43b3-b9ee-4ce40d092c71] Terminating instance
Jan 26 17:22:16 compute-0 nova_compute[185389]: 2026-01-26 17:22:16.828 185393 DEBUG nova.compute.manager [None req-48c6b89e-603e-4e3d-a455-1133d068f025 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] [instance: 8c28c24a-cab4-43b3-b9ee-4ce40d092c71] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 17:22:16 compute-0 kernel: tap86c33312-69 (unregistering): left promiscuous mode
Jan 26 17:22:16 compute-0 NetworkManager[56253]: <info>  [1769448136.8846] device (tap86c33312-69): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 17:22:16 compute-0 ovn_controller[97699]: 2026-01-26T17:22:16Z|00092|binding|INFO|Releasing lport 86c33312-6904-4dd4-9a95-7fd318980439 from this chassis (sb_readonly=0)
Jan 26 17:22:16 compute-0 ovn_controller[97699]: 2026-01-26T17:22:16Z|00093|binding|INFO|Setting lport 86c33312-6904-4dd4-9a95-7fd318980439 down in Southbound
Jan 26 17:22:16 compute-0 ovn_controller[97699]: 2026-01-26T17:22:16Z|00094|binding|INFO|Removing iface tap86c33312-69 ovn-installed in OVS
Jan 26 17:22:16 compute-0 nova_compute[185389]: 2026-01-26 17:22:16.901 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:22:16 compute-0 nova_compute[185389]: 2026-01-26 17:22:16.905 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:22:16 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:22:16.909 106955 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7d:0a:7b 10.100.0.12'], port_security=['fa:16:3e:7d:0a:7b 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '8c28c24a-cab4-43b3-b9ee-4ce40d092c71', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-92052205-69bb-42de-8996-b5b0b55d3221', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '854cc1d25bbe4358a1a0687611af792e', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'adc62df8-ace5-4031-9f17-e384cbe29eb5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.172'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b3aab8cb-6a21-458a-ad7d-d889e3560e0b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7faee18cfdf0>], logical_port=86c33312-6904-4dd4-9a95-7fd318980439) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7faee18cfdf0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 17:22:16 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:22:16.910 106955 INFO neutron.agent.ovn.metadata.agent [-] Port 86c33312-6904-4dd4-9a95-7fd318980439 in datapath 92052205-69bb-42de-8996-b5b0b55d3221 unbound from our chassis
Jan 26 17:22:16 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:22:16.911 106955 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 92052205-69bb-42de-8996-b5b0b55d3221, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 17:22:16 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:22:16.915 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[18af0b64-0b44-49f0-b488-579c560ae106]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:22:16 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:22:16.916 106955 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-92052205-69bb-42de-8996-b5b0b55d3221 namespace which is not needed anymore
Jan 26 17:22:16 compute-0 nova_compute[185389]: 2026-01-26 17:22:16.920 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:22:16 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Deactivated successfully.
Jan 26 17:22:16 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Consumed 6.938s CPU time.
Jan 26 17:22:16 compute-0 systemd-machined[156679]: Machine qemu-9-instance-00000009 terminated.
Jan 26 17:22:17 compute-0 nova_compute[185389]: 2026-01-26 17:22:17.010 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Updating instance_info_cache with network_info: [{"id": "6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3", "address": "fa:16:3e:b3:ea:64", "network": {"id": "4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1598418847-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b9ff6ad3012499db2eb0a82a1ccbcaa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e11a3e1-dc", "ovs_interfaceid": "6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 17:22:17 compute-0 nova_compute[185389]: 2026-01-26 17:22:17.033 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Releasing lock "refresh_cache-186e87cb-beb9-48df-8b10-dfc5c8afe996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 17:22:17 compute-0 nova_compute[185389]: 2026-01-26 17:22:17.034 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 26 17:22:17 compute-0 nova_compute[185389]: 2026-01-26 17:22:17.035 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:22:17 compute-0 nova_compute[185389]: 2026-01-26 17:22:17.035 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:22:17 compute-0 nova_compute[185389]: 2026-01-26 17:22:17.035 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:22:17 compute-0 nova_compute[185389]: 2026-01-26 17:22:17.036 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:22:17 compute-0 nova_compute[185389]: 2026-01-26 17:22:17.036 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 17:22:17 compute-0 nova_compute[185389]: 2026-01-26 17:22:17.061 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:22:17 compute-0 nova_compute[185389]: 2026-01-26 17:22:17.072 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:22:17 compute-0 nova_compute[185389]: 2026-01-26 17:22:17.117 185393 INFO nova.virt.libvirt.driver [-] [instance: 8c28c24a-cab4-43b3-b9ee-4ce40d092c71] Instance destroyed successfully.
Jan 26 17:22:17 compute-0 nova_compute[185389]: 2026-01-26 17:22:17.118 185393 DEBUG nova.objects.instance [None req-48c6b89e-603e-4e3d-a455-1133d068f025 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] Lazy-loading 'resources' on Instance uuid 8c28c24a-cab4-43b3-b9ee-4ce40d092c71 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 17:22:17 compute-0 nova_compute[185389]: 2026-01-26 17:22:17.132 185393 DEBUG nova.virt.libvirt.vif [None req-48c6b89e-603e-4e3d-a455-1133d068f025 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T17:21:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-692866047',display_name='tempest-ServersTestManualDisk-server-692866047',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-692866047',id=9,image_ref='90acf026-cf3a-409a-999e-35d89bb9a6bf',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCugjq2hLog8NxEN+V2U0sUpwXrrxhpFq5XCQG80oprZO9bLQcp2/aL0kKNeggZCa078aw+uAob0EH1cHywfjLqiOV4FpNB+Sqw44BwE3DbBn/9eOg+iYYMdGk07/+QebQ==',key_name='tempest-keypair-1760181719',keypairs=<?>,launch_index=0,launched_at=2026-01-26T17:22:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='854cc1d25bbe4358a1a0687611af792e',ramdisk_id='',reservation_id='r-3n75zltf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='90acf026-cf3a-409a-999e-35d89bb9a6bf',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestManualDisk-26592744',owner_user_name='tempest-ServersTestManualDisk-26592744-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T17:22:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='06957310edd64b7e95b237aa77f5311d',uuid=8c28c24a-cab4-43b3-b9ee-4ce40d092c71,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "86c33312-6904-4dd4-9a95-7fd318980439", "address": "fa:16:3e:7d:0a:7b", "network": {"id": "92052205-69bb-42de-8996-b5b0b55d3221", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-626505889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "854cc1d25bbe4358a1a0687611af792e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86c33312-69", "ovs_interfaceid": "86c33312-6904-4dd4-9a95-7fd318980439", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 17:22:17 compute-0 nova_compute[185389]: 2026-01-26 17:22:17.132 185393 DEBUG nova.network.os_vif_util [None req-48c6b89e-603e-4e3d-a455-1133d068f025 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] Converting VIF {"id": "86c33312-6904-4dd4-9a95-7fd318980439", "address": "fa:16:3e:7d:0a:7b", "network": {"id": "92052205-69bb-42de-8996-b5b0b55d3221", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-626505889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "854cc1d25bbe4358a1a0687611af792e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86c33312-69", "ovs_interfaceid": "86c33312-6904-4dd4-9a95-7fd318980439", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 17:22:17 compute-0 nova_compute[185389]: 2026-01-26 17:22:17.133 185393 DEBUG nova.network.os_vif_util [None req-48c6b89e-603e-4e3d-a455-1133d068f025 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:0a:7b,bridge_name='br-int',has_traffic_filtering=True,id=86c33312-6904-4dd4-9a95-7fd318980439,network=Network(92052205-69bb-42de-8996-b5b0b55d3221),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86c33312-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 17:22:17 compute-0 nova_compute[185389]: 2026-01-26 17:22:17.133 185393 DEBUG os_vif [None req-48c6b89e-603e-4e3d-a455-1133d068f025 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:0a:7b,bridge_name='br-int',has_traffic_filtering=True,id=86c33312-6904-4dd4-9a95-7fd318980439,network=Network(92052205-69bb-42de-8996-b5b0b55d3221),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86c33312-69') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 17:22:17 compute-0 nova_compute[185389]: 2026-01-26 17:22:17.134 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:22:17 compute-0 nova_compute[185389]: 2026-01-26 17:22:17.134 185393 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap86c33312-69, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:22:17 compute-0 nova_compute[185389]: 2026-01-26 17:22:17.136 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:22:17 compute-0 nova_compute[185389]: 2026-01-26 17:22:17.139 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:22:17 compute-0 nova_compute[185389]: 2026-01-26 17:22:17.142 185393 INFO os_vif [None req-48c6b89e-603e-4e3d-a455-1133d068f025 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:0a:7b,bridge_name='br-int',has_traffic_filtering=True,id=86c33312-6904-4dd4-9a95-7fd318980439,network=Network(92052205-69bb-42de-8996-b5b0b55d3221),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86c33312-69')
Jan 26 17:22:17 compute-0 nova_compute[185389]: 2026-01-26 17:22:17.143 185393 INFO nova.virt.libvirt.driver [None req-48c6b89e-603e-4e3d-a455-1133d068f025 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] [instance: 8c28c24a-cab4-43b3-b9ee-4ce40d092c71] Deleting instance files /var/lib/nova/instances/8c28c24a-cab4-43b3-b9ee-4ce40d092c71_del
Jan 26 17:22:17 compute-0 nova_compute[185389]: 2026-01-26 17:22:17.143 185393 INFO nova.virt.libvirt.driver [None req-48c6b89e-603e-4e3d-a455-1133d068f025 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] [instance: 8c28c24a-cab4-43b3-b9ee-4ce40d092c71] Deletion of /var/lib/nova/instances/8c28c24a-cab4-43b3-b9ee-4ce40d092c71_del complete
Jan 26 17:22:17 compute-0 nova_compute[185389]: 2026-01-26 17:22:17.207 185393 INFO nova.compute.manager [None req-48c6b89e-603e-4e3d-a455-1133d068f025 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] [instance: 8c28c24a-cab4-43b3-b9ee-4ce40d092c71] Took 0.38 seconds to destroy the instance on the hypervisor.
Jan 26 17:22:17 compute-0 nova_compute[185389]: 2026-01-26 17:22:17.207 185393 DEBUG oslo.service.loopingcall [None req-48c6b89e-603e-4e3d-a455-1133d068f025 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 17:22:17 compute-0 nova_compute[185389]: 2026-01-26 17:22:17.207 185393 DEBUG nova.compute.manager [-] [instance: 8c28c24a-cab4-43b3-b9ee-4ce40d092c71] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 17:22:17 compute-0 nova_compute[185389]: 2026-01-26 17:22:17.208 185393 DEBUG nova.network.neutron [-] [instance: 8c28c24a-cab4-43b3-b9ee-4ce40d092c71] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 17:22:17 compute-0 neutron-haproxy-ovnmeta-92052205-69bb-42de-8996-b5b0b55d3221[256495]: [NOTICE]   (256499) : haproxy version is 2.8.14-c23fe91
Jan 26 17:22:17 compute-0 neutron-haproxy-ovnmeta-92052205-69bb-42de-8996-b5b0b55d3221[256495]: [NOTICE]   (256499) : path to executable is /usr/sbin/haproxy
Jan 26 17:22:17 compute-0 neutron-haproxy-ovnmeta-92052205-69bb-42de-8996-b5b0b55d3221[256495]: [WARNING]  (256499) : Exiting Master process...
Jan 26 17:22:17 compute-0 neutron-haproxy-ovnmeta-92052205-69bb-42de-8996-b5b0b55d3221[256495]: [WARNING]  (256499) : Exiting Master process...
Jan 26 17:22:17 compute-0 neutron-haproxy-ovnmeta-92052205-69bb-42de-8996-b5b0b55d3221[256495]: [ALERT]    (256499) : Current worker (256501) exited with code 143 (Terminated)
Jan 26 17:22:17 compute-0 neutron-haproxy-ovnmeta-92052205-69bb-42de-8996-b5b0b55d3221[256495]: [WARNING]  (256499) : All workers exited. Exiting... (0)
Jan 26 17:22:17 compute-0 systemd[1]: libpod-8e5f659aaac5a617315849b78129b4e17db44a2fe6ccec2a37fc6ab9e8944fc8.scope: Deactivated successfully.
Jan 26 17:22:17 compute-0 podman[256918]: 2026-01-26 17:22:17.473627173 +0000 UTC m=+0.413068073 container died 8e5f659aaac5a617315849b78129b4e17db44a2fe6ccec2a37fc6ab9e8944fc8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-92052205-69bb-42de-8996-b5b0b55d3221, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 17:22:17 compute-0 nova_compute[185389]: 2026-01-26 17:22:17.598 185393 DEBUG nova.network.neutron [req-574c394d-3286-490f-a579-cab44daee0c2 req-9806b936-ec10-4d94-9786-7e7a89396699 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 8c28c24a-cab4-43b3-b9ee-4ce40d092c71] Updated VIF entry in instance network info cache for port 86c33312-6904-4dd4-9a95-7fd318980439. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 17:22:17 compute-0 nova_compute[185389]: 2026-01-26 17:22:17.598 185393 DEBUG nova.network.neutron [req-574c394d-3286-490f-a579-cab44daee0c2 req-9806b936-ec10-4d94-9786-7e7a89396699 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 8c28c24a-cab4-43b3-b9ee-4ce40d092c71] Updating instance_info_cache with network_info: [{"id": "86c33312-6904-4dd4-9a95-7fd318980439", "address": "fa:16:3e:7d:0a:7b", "network": {"id": "92052205-69bb-42de-8996-b5b0b55d3221", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-626505889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "854cc1d25bbe4358a1a0687611af792e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86c33312-69", "ovs_interfaceid": "86c33312-6904-4dd4-9a95-7fd318980439", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 17:22:17 compute-0 nova_compute[185389]: 2026-01-26 17:22:17.622 185393 DEBUG oslo_concurrency.lockutils [req-574c394d-3286-490f-a579-cab44daee0c2 req-9806b936-ec10-4d94-9786-7e7a89396699 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Releasing lock "refresh_cache-8c28c24a-cab4-43b3-b9ee-4ce40d092c71" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 17:22:17 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8e5f659aaac5a617315849b78129b4e17db44a2fe6ccec2a37fc6ab9e8944fc8-userdata-shm.mount: Deactivated successfully.
Jan 26 17:22:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-f6357b9af614135871e09f5e67e46dbdfa598f60a9ee0e4d9100e7dbb9be49d0-merged.mount: Deactivated successfully.
Jan 26 17:22:17 compute-0 podman[256918]: 2026-01-26 17:22:17.739697925 +0000 UTC m=+0.679138825 container cleanup 8e5f659aaac5a617315849b78129b4e17db44a2fe6ccec2a37fc6ab9e8944fc8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-92052205-69bb-42de-8996-b5b0b55d3221, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 26 17:22:17 compute-0 systemd[1]: libpod-conmon-8e5f659aaac5a617315849b78129b4e17db44a2fe6ccec2a37fc6ab9e8944fc8.scope: Deactivated successfully.
Jan 26 17:22:17 compute-0 nova_compute[185389]: 2026-01-26 17:22:17.820 185393 DEBUG nova.compute.manager [req-093cef6c-cba8-47b1-8d85-630ea64e6474 req-7b499eb1-3e4c-4279-bf83-78abd0f87aed 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 8c28c24a-cab4-43b3-b9ee-4ce40d092c71] Received event network-vif-unplugged-86c33312-6904-4dd4-9a95-7fd318980439 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 17:22:17 compute-0 nova_compute[185389]: 2026-01-26 17:22:17.820 185393 DEBUG oslo_concurrency.lockutils [req-093cef6c-cba8-47b1-8d85-630ea64e6474 req-7b499eb1-3e4c-4279-bf83-78abd0f87aed 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquiring lock "8c28c24a-cab4-43b3-b9ee-4ce40d092c71-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:22:17 compute-0 nova_compute[185389]: 2026-01-26 17:22:17.820 185393 DEBUG oslo_concurrency.lockutils [req-093cef6c-cba8-47b1-8d85-630ea64e6474 req-7b499eb1-3e4c-4279-bf83-78abd0f87aed 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "8c28c24a-cab4-43b3-b9ee-4ce40d092c71-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:22:17 compute-0 nova_compute[185389]: 2026-01-26 17:22:17.820 185393 DEBUG oslo_concurrency.lockutils [req-093cef6c-cba8-47b1-8d85-630ea64e6474 req-7b499eb1-3e4c-4279-bf83-78abd0f87aed 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "8c28c24a-cab4-43b3-b9ee-4ce40d092c71-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:22:17 compute-0 nova_compute[185389]: 2026-01-26 17:22:17.821 185393 DEBUG nova.compute.manager [req-093cef6c-cba8-47b1-8d85-630ea64e6474 req-7b499eb1-3e4c-4279-bf83-78abd0f87aed 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 8c28c24a-cab4-43b3-b9ee-4ce40d092c71] No waiting events found dispatching network-vif-unplugged-86c33312-6904-4dd4-9a95-7fd318980439 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 17:22:17 compute-0 nova_compute[185389]: 2026-01-26 17:22:17.821 185393 DEBUG nova.compute.manager [req-093cef6c-cba8-47b1-8d85-630ea64e6474 req-7b499eb1-3e4c-4279-bf83-78abd0f87aed 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 8c28c24a-cab4-43b3-b9ee-4ce40d092c71] Received event network-vif-unplugged-86c33312-6904-4dd4-9a95-7fd318980439 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 17:22:18 compute-0 podman[256962]: 2026-01-26 17:22:18.203751453 +0000 UTC m=+0.434969109 container remove 8e5f659aaac5a617315849b78129b4e17db44a2fe6ccec2a37fc6ab9e8944fc8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-92052205-69bb-42de-8996-b5b0b55d3221, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 26 17:22:18 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:22:18.212 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[cc32dcf2-942a-4cfc-b2d9-42344b6badfa]: (4, ('Mon Jan 26 05:22:17 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-92052205-69bb-42de-8996-b5b0b55d3221 (8e5f659aaac5a617315849b78129b4e17db44a2fe6ccec2a37fc6ab9e8944fc8)\n8e5f659aaac5a617315849b78129b4e17db44a2fe6ccec2a37fc6ab9e8944fc8\nMon Jan 26 05:22:17 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-92052205-69bb-42de-8996-b5b0b55d3221 (8e5f659aaac5a617315849b78129b4e17db44a2fe6ccec2a37fc6ab9e8944fc8)\n8e5f659aaac5a617315849b78129b4e17db44a2fe6ccec2a37fc6ab9e8944fc8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:22:18 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:22:18.216 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[9f64e5d6-ff84-49c2-92b7-d31588158e2a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:22:18 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:22:18.218 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap92052205-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:22:18 compute-0 nova_compute[185389]: 2026-01-26 17:22:18.220 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:22:18 compute-0 kernel: tap92052205-60: left promiscuous mode
Jan 26 17:22:18 compute-0 nova_compute[185389]: 2026-01-26 17:22:18.238 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:22:18 compute-0 nova_compute[185389]: 2026-01-26 17:22:18.239 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:22:18 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:22:18.242 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[9522ee57-b18e-45b4-9e1f-628ffc30dd60]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:22:18 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:22:18.257 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[cca4c7e9-bbab-459a-8931-15b837fcb348]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:22:18 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:22:18.260 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[9d46ac98-ca09-447b-ab81-ad869d86eb40]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:22:18 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:22:18.290 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[3d0d0033-3527-4006-8066-982cb80ce33a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 676106, 'reachable_time': 22958, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 256979, 'error': None, 'target': 'ovnmeta-92052205-69bb-42de-8996-b5b0b55d3221', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:22:18 compute-0 systemd[1]: run-netns-ovnmeta\x2d92052205\x2d69bb\x2d42de\x2d8996\x2db5b0b55d3221.mount: Deactivated successfully.
Jan 26 17:22:18 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:22:18.299 107449 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-92052205-69bb-42de-8996-b5b0b55d3221 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 17:22:18 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:22:18.299 107449 DEBUG oslo.privsep.daemon [-] privsep: reply[240d9a69-2145-4d03-a5be-1d0b2bbb8cbe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:22:19 compute-0 nova_compute[185389]: 2026-01-26 17:22:19.951 185393 DEBUG nova.compute.manager [req-17ae66b4-a833-4c03-8fcd-c6b90f8f6c6c req-46a36e44-adff-4ccf-aaaa-641c422578d5 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 8c28c24a-cab4-43b3-b9ee-4ce40d092c71] Received event network-vif-plugged-86c33312-6904-4dd4-9a95-7fd318980439 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 17:22:19 compute-0 nova_compute[185389]: 2026-01-26 17:22:19.951 185393 DEBUG oslo_concurrency.lockutils [req-17ae66b4-a833-4c03-8fcd-c6b90f8f6c6c req-46a36e44-adff-4ccf-aaaa-641c422578d5 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquiring lock "8c28c24a-cab4-43b3-b9ee-4ce40d092c71-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:22:19 compute-0 nova_compute[185389]: 2026-01-26 17:22:19.951 185393 DEBUG oslo_concurrency.lockutils [req-17ae66b4-a833-4c03-8fcd-c6b90f8f6c6c req-46a36e44-adff-4ccf-aaaa-641c422578d5 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "8c28c24a-cab4-43b3-b9ee-4ce40d092c71-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:22:19 compute-0 nova_compute[185389]: 2026-01-26 17:22:19.952 185393 DEBUG oslo_concurrency.lockutils [req-17ae66b4-a833-4c03-8fcd-c6b90f8f6c6c req-46a36e44-adff-4ccf-aaaa-641c422578d5 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "8c28c24a-cab4-43b3-b9ee-4ce40d092c71-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:22:19 compute-0 nova_compute[185389]: 2026-01-26 17:22:19.952 185393 DEBUG nova.compute.manager [req-17ae66b4-a833-4c03-8fcd-c6b90f8f6c6c req-46a36e44-adff-4ccf-aaaa-641c422578d5 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 8c28c24a-cab4-43b3-b9ee-4ce40d092c71] No waiting events found dispatching network-vif-plugged-86c33312-6904-4dd4-9a95-7fd318980439 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 17:22:19 compute-0 nova_compute[185389]: 2026-01-26 17:22:19.953 185393 WARNING nova.compute.manager [req-17ae66b4-a833-4c03-8fcd-c6b90f8f6c6c req-46a36e44-adff-4ccf-aaaa-641c422578d5 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 8c28c24a-cab4-43b3-b9ee-4ce40d092c71] Received unexpected event network-vif-plugged-86c33312-6904-4dd4-9a95-7fd318980439 for instance with vm_state active and task_state deleting.
Jan 26 17:22:20 compute-0 ovn_controller[97699]: 2026-01-26T17:22:20Z|00095|binding|INFO|Releasing lport 1a341684-bed3-4740-9502-499c9512f610 from this chassis (sb_readonly=0)
Jan 26 17:22:20 compute-0 ovn_controller[97699]: 2026-01-26T17:22:20Z|00096|binding|INFO|Releasing lport d58b7d53-5cc1-4ed8-aa06-162121fd1800 from this chassis (sb_readonly=0)
Jan 26 17:22:20 compute-0 nova_compute[185389]: 2026-01-26 17:22:20.627 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:22:20 compute-0 nova_compute[185389]: 2026-01-26 17:22:20.855 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:22:21 compute-0 nova_compute[185389]: 2026-01-26 17:22:21.276 185393 DEBUG nova.network.neutron [-] [instance: 8c28c24a-cab4-43b3-b9ee-4ce40d092c71] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 17:22:21 compute-0 nova_compute[185389]: 2026-01-26 17:22:21.292 185393 INFO nova.compute.manager [-] [instance: 8c28c24a-cab4-43b3-b9ee-4ce40d092c71] Took 4.08 seconds to deallocate network for instance.
Jan 26 17:22:21 compute-0 nova_compute[185389]: 2026-01-26 17:22:21.339 185393 DEBUG oslo_concurrency.lockutils [None req-48c6b89e-603e-4e3d-a455-1133d068f025 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:22:21 compute-0 nova_compute[185389]: 2026-01-26 17:22:21.339 185393 DEBUG oslo_concurrency.lockutils [None req-48c6b89e-603e-4e3d-a455-1133d068f025 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:22:21 compute-0 nova_compute[185389]: 2026-01-26 17:22:21.463 185393 DEBUG nova.compute.provider_tree [None req-48c6b89e-603e-4e3d-a455-1133d068f025 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:22:21 compute-0 nova_compute[185389]: 2026-01-26 17:22:21.480 185393 DEBUG nova.scheduler.client.report [None req-48c6b89e-603e-4e3d-a455-1133d068f025 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:22:21 compute-0 nova_compute[185389]: 2026-01-26 17:22:21.500 185393 DEBUG oslo_concurrency.lockutils [None req-48c6b89e-603e-4e3d-a455-1133d068f025 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.160s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:22:21 compute-0 nova_compute[185389]: 2026-01-26 17:22:21.520 185393 INFO nova.scheduler.client.report [None req-48c6b89e-603e-4e3d-a455-1133d068f025 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] Deleted allocations for instance 8c28c24a-cab4-43b3-b9ee-4ce40d092c71
Jan 26 17:22:21 compute-0 nova_compute[185389]: 2026-01-26 17:22:21.576 185393 DEBUG oslo_concurrency.lockutils [None req-48c6b89e-603e-4e3d-a455-1133d068f025 06957310edd64b7e95b237aa77f5311d 854cc1d25bbe4358a1a0687611af792e - - default default] Lock "8c28c24a-cab4-43b3-b9ee-4ce40d092c71" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.753s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:22:21 compute-0 nova_compute[185389]: 2026-01-26 17:22:21.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:22:21 compute-0 nova_compute[185389]: 2026-01-26 17:22:21.749 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:22:21 compute-0 nova_compute[185389]: 2026-01-26 17:22:21.750 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:22:21 compute-0 nova_compute[185389]: 2026-01-26 17:22:21.750 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:22:21 compute-0 nova_compute[185389]: 2026-01-26 17:22:21.750 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 17:22:21 compute-0 nova_compute[185389]: 2026-01-26 17:22:21.849 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3a17d6a2-7bda-406b-a180-049f0e7adc78/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:22:21 compute-0 nova_compute[185389]: 2026-01-26 17:22:21.924 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3a17d6a2-7bda-406b-a180-049f0e7adc78/disk --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:22:21 compute-0 nova_compute[185389]: 2026-01-26 17:22:21.925 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3a17d6a2-7bda-406b-a180-049f0e7adc78/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:22:21 compute-0 nova_compute[185389]: 2026-01-26 17:22:21.998 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3a17d6a2-7bda-406b-a180-049f0e7adc78/disk --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:22:22 compute-0 nova_compute[185389]: 2026-01-26 17:22:22.007 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/186e87cb-beb9-48df-8b10-dfc5c8afe996/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:22:22 compute-0 nova_compute[185389]: 2026-01-26 17:22:22.075 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/186e87cb-beb9-48df-8b10-dfc5c8afe996/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:22:22 compute-0 nova_compute[185389]: 2026-01-26 17:22:22.076 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/186e87cb-beb9-48df-8b10-dfc5c8afe996/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:22:22 compute-0 nova_compute[185389]: 2026-01-26 17:22:22.135 185393 DEBUG nova.compute.manager [req-e39784cc-84a2-488c-93ee-d40ef1ba40d1 req-faf76bdd-778e-41eb-911f-7963d4cc942f 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] Received event network-changed-244cc784-cc22-4baa-ae9b-a9648a2a11b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 17:22:22 compute-0 nova_compute[185389]: 2026-01-26 17:22:22.136 185393 DEBUG nova.compute.manager [req-e39784cc-84a2-488c-93ee-d40ef1ba40d1 req-faf76bdd-778e-41eb-911f-7963d4cc942f 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] Refreshing instance network info cache due to event network-changed-244cc784-cc22-4baa-ae9b-a9648a2a11b8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 17:22:22 compute-0 nova_compute[185389]: 2026-01-26 17:22:22.137 185393 DEBUG oslo_concurrency.lockutils [req-e39784cc-84a2-488c-93ee-d40ef1ba40d1 req-faf76bdd-778e-41eb-911f-7963d4cc942f 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquiring lock "refresh_cache-3a17d6a2-7bda-406b-a180-049f0e7adc78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 17:22:22 compute-0 nova_compute[185389]: 2026-01-26 17:22:22.137 185393 DEBUG oslo_concurrency.lockutils [req-e39784cc-84a2-488c-93ee-d40ef1ba40d1 req-faf76bdd-778e-41eb-911f-7963d4cc942f 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquired lock "refresh_cache-3a17d6a2-7bda-406b-a180-049f0e7adc78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 17:22:22 compute-0 nova_compute[185389]: 2026-01-26 17:22:22.138 185393 DEBUG nova.network.neutron [req-e39784cc-84a2-488c-93ee-d40ef1ba40d1 req-faf76bdd-778e-41eb-911f-7963d4cc942f 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] Refreshing network info cache for port 244cc784-cc22-4baa-ae9b-a9648a2a11b8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 17:22:22 compute-0 nova_compute[185389]: 2026-01-26 17:22:22.140 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:22:22 compute-0 nova_compute[185389]: 2026-01-26 17:22:22.159 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/186e87cb-beb9-48df-8b10-dfc5c8afe996/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:22:22 compute-0 nova_compute[185389]: 2026-01-26 17:22:22.658 185393 WARNING nova.virt.libvirt.driver [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:22:22 compute-0 nova_compute[185389]: 2026-01-26 17:22:22.659 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4981MB free_disk=72.37714004516602GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 17:22:22 compute-0 nova_compute[185389]: 2026-01-26 17:22:22.659 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:22:22 compute-0 nova_compute[185389]: 2026-01-26 17:22:22.660 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:22:22 compute-0 nova_compute[185389]: 2026-01-26 17:22:22.725 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance 186e87cb-beb9-48df-8b10-dfc5c8afe996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 17:22:22 compute-0 nova_compute[185389]: 2026-01-26 17:22:22.725 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance 3a17d6a2-7bda-406b-a180-049f0e7adc78 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 17:22:22 compute-0 nova_compute[185389]: 2026-01-26 17:22:22.726 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 17:22:22 compute-0 nova_compute[185389]: 2026-01-26 17:22:22.726 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 17:22:22 compute-0 nova_compute[185389]: 2026-01-26 17:22:22.826 185393 DEBUG nova.compute.provider_tree [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:22:22 compute-0 nova_compute[185389]: 2026-01-26 17:22:22.843 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:22:22 compute-0 nova_compute[185389]: 2026-01-26 17:22:22.862 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 17:22:22 compute-0 nova_compute[185389]: 2026-01-26 17:22:22.863 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.203s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:22:24 compute-0 ovn_controller[97699]: 2026-01-26T17:22:24Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b3:ea:64 10.100.0.5
Jan 26 17:22:24 compute-0 ovn_controller[97699]: 2026-01-26T17:22:24Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b3:ea:64 10.100.0.5
Jan 26 17:22:25 compute-0 nova_compute[185389]: 2026-01-26 17:22:25.842 185393 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769448130.832833, cecfd5ba-76f1-47f6-8845-36e6c7ed9773 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 17:22:25 compute-0 nova_compute[185389]: 2026-01-26 17:22:25.843 185393 INFO nova.compute.manager [-] [instance: cecfd5ba-76f1-47f6-8845-36e6c7ed9773] VM Stopped (Lifecycle Event)
Jan 26 17:22:25 compute-0 nova_compute[185389]: 2026-01-26 17:22:25.860 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:22:25 compute-0 nova_compute[185389]: 2026-01-26 17:22:25.868 185393 DEBUG nova.compute.manager [None req-f5b01db1-d9f5-465d-a879-9c6f60dfa344 - - - - - -] [instance: cecfd5ba-76f1-47f6-8845-36e6c7ed9773] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 17:22:26 compute-0 nova_compute[185389]: 2026-01-26 17:22:26.038 185393 DEBUG nova.network.neutron [req-e39784cc-84a2-488c-93ee-d40ef1ba40d1 req-faf76bdd-778e-41eb-911f-7963d4cc942f 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] Updated VIF entry in instance network info cache for port 244cc784-cc22-4baa-ae9b-a9648a2a11b8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 17:22:26 compute-0 nova_compute[185389]: 2026-01-26 17:22:26.039 185393 DEBUG nova.network.neutron [req-e39784cc-84a2-488c-93ee-d40ef1ba40d1 req-faf76bdd-778e-41eb-911f-7963d4cc942f 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] Updating instance_info_cache with network_info: [{"id": "244cc784-cc22-4baa-ae9b-a9648a2a11b8", "address": "fa:16:3e:95:38:b5", "network": {"id": "f2973d9a-cd90-4302-94cd-5d199c633af0", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1831496438-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63b4132d471f40c4bc46982b5adba0ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap244cc784-cc", "ovs_interfaceid": "244cc784-cc22-4baa-ae9b-a9648a2a11b8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 17:22:26 compute-0 nova_compute[185389]: 2026-01-26 17:22:26.059 185393 DEBUG oslo_concurrency.lockutils [req-e39784cc-84a2-488c-93ee-d40ef1ba40d1 req-faf76bdd-778e-41eb-911f-7963d4cc942f 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Releasing lock "refresh_cache-3a17d6a2-7bda-406b-a180-049f0e7adc78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 17:22:26 compute-0 nova_compute[185389]: 2026-01-26 17:22:26.060 185393 DEBUG nova.compute.manager [req-e39784cc-84a2-488c-93ee-d40ef1ba40d1 req-faf76bdd-778e-41eb-911f-7963d4cc942f 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 8c28c24a-cab4-43b3-b9ee-4ce40d092c71] Received event network-vif-deleted-86c33312-6904-4dd4-9a95-7fd318980439 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 17:22:26 compute-0 nova_compute[185389]: 2026-01-26 17:22:26.739 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:22:27 compute-0 nova_compute[185389]: 2026-01-26 17:22:27.143 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:22:27 compute-0 nova_compute[185389]: 2026-01-26 17:22:27.858 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:22:29 compute-0 podman[201244]: time="2026-01-26T17:22:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 17:22:29 compute-0 podman[201244]: @ - - [26/Jan/2026:17:22:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29741 "" "Go-http-client/1.1"
Jan 26 17:22:29 compute-0 podman[201244]: @ - - [26/Jan/2026:17:22:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4847 "" "Go-http-client/1.1"
Jan 26 17:22:30 compute-0 nova_compute[185389]: 2026-01-26 17:22:30.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:22:30 compute-0 nova_compute[185389]: 2026-01-26 17:22:30.864 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:22:31 compute-0 ovn_controller[97699]: 2026-01-26T17:22:31Z|00097|binding|INFO|Releasing lport 1a341684-bed3-4740-9502-499c9512f610 from this chassis (sb_readonly=0)
Jan 26 17:22:31 compute-0 ovn_controller[97699]: 2026-01-26T17:22:31Z|00098|binding|INFO|Releasing lport d58b7d53-5cc1-4ed8-aa06-162121fd1800 from this chassis (sb_readonly=0)
Jan 26 17:22:31 compute-0 nova_compute[185389]: 2026-01-26 17:22:31.327 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:22:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:31.356 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 26 17:22:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:31.357 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 26 17:22:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:31.357 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:22:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:31.358 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f04ce8af020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:22:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:31.358 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:22:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:31.358 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:22:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:31.359 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:22:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:31.359 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:22:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:31.359 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:22:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:31.359 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:22:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:31.359 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:22:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:31.359 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:22:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:31.359 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:22:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:31.359 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:22:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:31.359 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:22:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:31.359 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:22:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:31.360 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8adb20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:22:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:31.360 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:22:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:31.360 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:22:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:31.360 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:22:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:31.360 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:22:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:31.360 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:22:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:31.361 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:22:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:31.361 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:22:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:31.361 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:22:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:31.361 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:22:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:31.361 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:22:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:31.361 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:22:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:31.361 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:22:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:31.363 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 3a17d6a2-7bda-406b-a180-049f0e7adc78 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 26 17:22:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:31.364 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/3a17d6a2-7bda-406b-a180-049f0e7adc78 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}f609241ecdf9402bd0546eda97196742cf90b225f1ce4eb867c55aad4d129116" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 26 17:22:31 compute-0 openstack_network_exporter[204387]: ERROR   17:22:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 17:22:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:22:31 compute-0 openstack_network_exporter[204387]: ERROR   17:22:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 17:22:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:22:32 compute-0 nova_compute[185389]: 2026-01-26 17:22:32.112 185393 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769448137.110831, 8c28c24a-cab4-43b3-b9ee-4ce40d092c71 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 17:22:32 compute-0 nova_compute[185389]: 2026-01-26 17:22:32.112 185393 INFO nova.compute.manager [-] [instance: 8c28c24a-cab4-43b3-b9ee-4ce40d092c71] VM Stopped (Lifecycle Event)
Jan 26 17:22:32 compute-0 nova_compute[185389]: 2026-01-26 17:22:32.137 185393 DEBUG nova.compute.manager [None req-c98cb557-68e1-4876-a013-8b805aea8abc - - - - - -] [instance: 8c28c24a-cab4-43b3-b9ee-4ce40d092c71] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 17:22:32 compute-0 nova_compute[185389]: 2026-01-26 17:22:32.145 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:22:32 compute-0 nova_compute[185389]: 2026-01-26 17:22:32.715 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:22:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:32.719 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1996 Content-Type: application/json Date: Mon, 26 Jan 2026 17:22:31 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-538d745e-1662-42cc-a09d-3c6392a1e408 x-openstack-request-id: req-538d745e-1662-42cc-a09d-3c6392a1e408 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 26 17:22:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:32.719 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "3a17d6a2-7bda-406b-a180-049f0e7adc78", "name": "tempest-AttachInterfacesUnderV243Test-server-1027940370", "status": "ACTIVE", "tenant_id": "63b4132d471f40c4bc46982b5adba0ec", "user_id": "0ac7a648f1b542b193f88ff9b120f211", "metadata": {}, "hostId": "a8348163e5340508155ca5d5e5f8c1b9f0ca789f785d6866084545eb", "image": {"id": "90acf026-cf3a-409a-999e-35d89bb9a6bf", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/90acf026-cf3a-409a-999e-35d89bb9a6bf"}]}, "flavor": {"id": "8d013773-e8ea-4b83-a8e3-f58d9749637f", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/8d013773-e8ea-4b83-a8e3-f58d9749637f"}]}, "created": "2026-01-26T17:21:58Z", "updated": "2026-01-26T17:22:12Z", "addresses": {"tempest-AttachInterfacesUnderV243Test-1831496438-network": [{"version": 4, "addr": "10.100.0.3", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:95:38:b5"}, {"version": 4, "addr": "192.168.122.200", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:95:38:b5"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/3a17d6a2-7bda-406b-a180-049f0e7adc78"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/3a17d6a2-7bda-406b-a180-049f0e7adc78"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-keypair-818388180", "OS-SRV-USG:launched_at": "2026-01-26T17:22:12.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-securitygroup--2078546837"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000a", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", 
"OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 26 17:22:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:32.719 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/3a17d6a2-7bda-406b-a180-049f0e7adc78 used request id req-538d745e-1662-42cc-a09d-3c6392a1e408 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 26 17:22:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:32.721 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '3a17d6a2-7bda-406b-a180-049f0e7adc78', 'name': 'tempest-AttachInterfacesUnderV243Test-server-1027940370', 'flavor': {'id': '8d013773-e8ea-4b83-a8e3-f58d9749637f', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '90acf026-cf3a-409a-999e-35d89bb9a6bf'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000a', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '63b4132d471f40c4bc46982b5adba0ec', 'user_id': '0ac7a648f1b542b193f88ff9b120f211', 'hostId': 'a8348163e5340508155ca5d5e5f8c1b9f0ca789f785d6866084545eb', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 26 17:22:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:32.724 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 186e87cb-beb9-48df-8b10-dfc5c8afe996 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 26 17:22:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:32.725 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/186e87cb-beb9-48df-8b10-dfc5c8afe996 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}f609241ecdf9402bd0546eda97196742cf90b225f1ce4eb867c55aad4d129116" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 26 17:22:33 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:22:33.900 106955 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '0e:35:12', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'de:09:3c:73:7d:ab'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 17:22:33 compute-0 nova_compute[185389]: 2026-01-26 17:22:33.901 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:22:33 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:22:33.904 106955 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.271 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1978 Content-Type: application/json Date: Mon, 26 Jan 2026 17:22:32 GMT Keep-Alive: timeout=5, max=99 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-5d4f95b4-1e50-4d79-8bb6-d754adb1be05 x-openstack-request-id: req-5d4f95b4-1e50-4d79-8bb6-d754adb1be05 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.272 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "186e87cb-beb9-48df-8b10-dfc5c8afe996", "name": "tempest-ServerActionsTestJSON-server-34810632", "status": "ACTIVE", "tenant_id": "9b9ff6ad3012499db2eb0a82a1ccbcaa", "user_id": "6acd3be55c754b3dbf8ef6c0922b18ae", "metadata": {}, "hostId": "18f57cfccecf2bdf7d53eb65c1eb28a6f43c93a36ca96f8beec3f1d9", "image": {"id": "90acf026-cf3a-409a-999e-35d89bb9a6bf", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/90acf026-cf3a-409a-999e-35d89bb9a6bf"}]}, "flavor": {"id": "8d013773-e8ea-4b83-a8e3-f58d9749637f", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/8d013773-e8ea-4b83-a8e3-f58d9749637f"}]}, "created": "2026-01-26T17:21:31Z", "updated": "2026-01-26T17:21:51Z", "addresses": {"tempest-ServerActionsTestJSON-1598418847-network": [{"version": 4, "addr": "10.100.0.5", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:b3:ea:64"}, {"version": 4, "addr": "192.168.122.201", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:b3:ea:64"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/186e87cb-beb9-48df-8b10-dfc5c8afe996"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/186e87cb-beb9-48df-8b10-dfc5c8afe996"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-keypair-288037080", "OS-SRV-USG:launched_at": "2026-01-26T17:21:51.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-securitygroup--1828152097"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000007", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": 
null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.272 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/186e87cb-beb9-48df-8b10-dfc5c8afe996 used request id req-5d4f95b4-1e50-4d79-8bb6-d754adb1be05 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.273 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '186e87cb-beb9-48df-8b10-dfc5c8afe996', 'name': 'tempest-ServerActionsTestJSON-server-34810632', 'flavor': {'id': '8d013773-e8ea-4b83-a8e3-f58d9749637f', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '90acf026-cf3a-409a-999e-35d89bb9a6bf'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000007', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '9b9ff6ad3012499db2eb0a82a1ccbcaa', 'user_id': '6acd3be55c754b3dbf8ef6c0922b18ae', 'hostId': '18f57cfccecf2bdf7d53eb65c1eb28a6f43c93a36ca96f8beec3f1d9', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.274 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.274 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.274 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.275 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.276 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-26T17:22:34.274814) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.323 14 DEBUG ceilometer.compute.pollsters [-] 3a17d6a2-7bda-406b-a180-049f0e7adc78/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.324 14 DEBUG ceilometer.compute.pollsters [-] 3a17d6a2-7bda-406b-a180-049f0e7adc78/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.382 14 DEBUG ceilometer.compute.pollsters [-] 186e87cb-beb9-48df-8b10-dfc5c8afe996/disk.device.write.bytes volume: 72916992 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.382 14 DEBUG ceilometer.compute.pollsters [-] 186e87cb-beb9-48df-8b10-dfc5c8afe996/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.383 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.383 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f04ce8af080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.383 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.384 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.384 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.384 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.385 14 DEBUG ceilometer.compute.pollsters [-] 3a17d6a2-7bda-406b-a180-049f0e7adc78/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.385 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-26T17:22:34.384205) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.385 14 DEBUG ceilometer.compute.pollsters [-] 3a17d6a2-7bda-406b-a180-049f0e7adc78/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.385 14 DEBUG ceilometer.compute.pollsters [-] 186e87cb-beb9-48df-8b10-dfc5c8afe996/disk.device.write.latency volume: 4038939860 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.385 14 DEBUG ceilometer.compute.pollsters [-] 186e87cb-beb9-48df-8b10-dfc5c8afe996/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.386 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.386 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f04ce8af0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.386 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.386 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.386 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.387 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.387 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-26T17:22:34.387019) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.387 14 DEBUG ceilometer.compute.pollsters [-] 3a17d6a2-7bda-406b-a180-049f0e7adc78/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.387 14 DEBUG ceilometer.compute.pollsters [-] 3a17d6a2-7bda-406b-a180-049f0e7adc78/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.388 14 DEBUG ceilometer.compute.pollsters [-] 186e87cb-beb9-48df-8b10-dfc5c8afe996/disk.device.write.requests volume: 296 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.388 14 DEBUG ceilometer.compute.pollsters [-] 186e87cb-beb9-48df-8b10-dfc5c8afe996/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.388 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.389 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f04ce8af470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.389 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.389 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.389 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.393 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.394 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-26T17:22:34.389561) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.399 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 3a17d6a2-7bda-406b-a180-049f0e7adc78 / tap244cc784-cc inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.400 14 DEBUG ceilometer.compute.pollsters [-] 3a17d6a2-7bda-406b-a180-049f0e7adc78/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.405 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 186e87cb-beb9-48df-8b10-dfc5c8afe996 / tap6e11a3e1-dc inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.406 14 DEBUG ceilometer.compute.pollsters [-] 186e87cb-beb9-48df-8b10-dfc5c8afe996/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.407 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.407 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f04ce8ac260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.408 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.408 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.409 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.410 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.410 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2026-01-26T17:22:34.409793) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.410 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.411 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-1027940370>, <NovaLikeServer: tempest-ServerActionsTestJSON-server-34810632>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-1027940370>, <NovaLikeServer: tempest-ServerActionsTestJSON-server-34810632>]
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.411 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f04cfa81c70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.412 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.412 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.412 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.413 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.413 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-26T17:22:34.413300) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.443 14 DEBUG ceilometer.compute.pollsters [-] 3a17d6a2-7bda-406b-a180-049f0e7adc78/cpu volume: 22170000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.472 14 DEBUG ceilometer.compute.pollsters [-] 186e87cb-beb9-48df-8b10-dfc5c8afe996/cpu volume: 33090000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.473 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.473 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f04ce8ac170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.473 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.474 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.474 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.474 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.475 14 DEBUG ceilometer.compute.pollsters [-] 3a17d6a2-7bda-406b-a180-049f0e7adc78/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.476 14 DEBUG ceilometer.compute.pollsters [-] 186e87cb-beb9-48df-8b10-dfc5c8afe996/network.incoming.packets volume: 15 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.475 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-26T17:22:34.474540) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.476 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.477 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f04ce8af140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.478 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.478 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.478 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.479 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.479 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-26T17:22:34.478977) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.480 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.480 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f04ce8adb50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.480 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.480 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.481 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.481 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.482 14 DEBUG ceilometer.compute.pollsters [-] 3a17d6a2-7bda-406b-a180-049f0e7adc78/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.481 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-26T17:22:34.481475) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.482 14 DEBUG ceilometer.compute.pollsters [-] 186e87cb-beb9-48df-8b10-dfc5c8afe996/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.483 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.483 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f04ce8ad9a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.483 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.484 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.484 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.484 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.485 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-26T17:22:34.484564) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.485 14 DEBUG ceilometer.compute.pollsters [-] 3a17d6a2-7bda-406b-a180-049f0e7adc78/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.485 14 DEBUG ceilometer.compute.pollsters [-] 186e87cb-beb9-48df-8b10-dfc5c8afe996/network.outgoing.bytes volume: 1550 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.486 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.486 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f04ce8af1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.486 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.487 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.487 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.487 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.488 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-26T17:22:34.487735) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.488 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.489 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f04ce8ad8e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.489 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.489 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.489 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.490 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.490 14 DEBUG ceilometer.compute.pollsters [-] 3a17d6a2-7bda-406b-a180-049f0e7adc78/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.491 14 DEBUG ceilometer.compute.pollsters [-] 186e87cb-beb9-48df-8b10-dfc5c8afe996/network.outgoing.packets volume: 15 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.491 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-26T17:22:34.490160) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.491 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.491 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f04ce8ada60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.492 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.492 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.492 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.492 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.493 14 DEBUG ceilometer.compute.pollsters [-] 3a17d6a2-7bda-406b-a180-049f0e7adc78/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.493 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-26T17:22:34.492337) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.493 14 DEBUG ceilometer.compute.pollsters [-] 186e87cb-beb9-48df-8b10-dfc5c8afe996/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.493 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.494 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f04ce8adaf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.494 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.494 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8adb20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.494 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8adb20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.495 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.495 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.495 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2026-01-26T17:22:34.495221) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.496 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-1027940370>, <NovaLikeServer: tempest-ServerActionsTestJSON-server-34810632>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-1027940370>, <NovaLikeServer: tempest-ServerActionsTestJSON-server-34810632>]
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.496 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f04ce8ad880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.496 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.497 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.497 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.497 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.498 14 DEBUG ceilometer.compute.pollsters [-] 3a17d6a2-7bda-406b-a180-049f0e7adc78/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.498 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-26T17:22:34.497693) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.498 14 DEBUG ceilometer.compute.pollsters [-] 186e87cb-beb9-48df-8b10-dfc5c8afe996/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.499 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.499 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f04ce8af3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.500 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.500 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.500 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.501 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.501 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-26T17:22:34.500883) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.501 14 DEBUG ceilometer.compute.pollsters [-] 3a17d6a2-7bda-406b-a180-049f0e7adc78/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.501 14 WARNING ceilometer.compute.pollsters [-] memory.usage statistic is not available for instance 3a17d6a2-7bda-406b-a180-049f0e7adc78: ceilometer.compute.pollsters.NoVolumeException
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.502 14 DEBUG ceilometer.compute.pollsters [-] 186e87cb-beb9-48df-8b10-dfc5c8afe996/memory.usage volume: 42.7890625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.502 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.502 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f04ce8af6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.503 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.503 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.503 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.504 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.504 14 DEBUG ceilometer.compute.pollsters [-] 3a17d6a2-7bda-406b-a180-049f0e7adc78/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.504 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-26T17:22:34.504100) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.505 14 DEBUG ceilometer.compute.pollsters [-] 186e87cb-beb9-48df-8b10-dfc5c8afe996/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.505 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.506 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f04ce8af410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.506 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.506 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.506 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.507 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.507 14 DEBUG ceilometer.compute.pollsters [-] 3a17d6a2-7bda-406b-a180-049f0e7adc78/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.507 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-26T17:22:34.507246) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.508 14 DEBUG ceilometer.compute.pollsters [-] 186e87cb-beb9-48df-8b10-dfc5c8afe996/network.incoming.bytes volume: 1796 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.509 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.509 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f04ce8ac470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.509 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.509 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.510 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.510 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.511 14 DEBUG ceilometer.compute.pollsters [-] 3a17d6a2-7bda-406b-a180-049f0e7adc78/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.510 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-26T17:22:34.510464) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.511 14 DEBUG ceilometer.compute.pollsters [-] 186e87cb-beb9-48df-8b10-dfc5c8afe996/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.511 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.512 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f04d17142f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.512 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.512 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.512 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.513 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.513 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-26T17:22:34.512972) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.527 14 DEBUG ceilometer.compute.pollsters [-] 3a17d6a2-7bda-406b-a180-049f0e7adc78/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.528 14 DEBUG ceilometer.compute.pollsters [-] 3a17d6a2-7bda-406b-a180-049f0e7adc78/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.551 14 DEBUG ceilometer.compute.pollsters [-] 186e87cb-beb9-48df-8b10-dfc5c8afe996/disk.device.allocation volume: 30679040 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.552 14 DEBUG ceilometer.compute.pollsters [-] 186e87cb-beb9-48df-8b10-dfc5c8afe996/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.552 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.553 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f04ce8afe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.553 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.553 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.553 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.553 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.554 14 DEBUG ceilometer.compute.pollsters [-] 3a17d6a2-7bda-406b-a180-049f0e7adc78/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.555 14 DEBUG ceilometer.compute.pollsters [-] 3a17d6a2-7bda-406b-a180-049f0e7adc78/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.554 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-26T17:22:34.553380) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.555 14 DEBUG ceilometer.compute.pollsters [-] 186e87cb-beb9-48df-8b10-dfc5c8afe996/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.556 14 DEBUG ceilometer.compute.pollsters [-] 186e87cb-beb9-48df-8b10-dfc5c8afe996/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.556 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.557 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f04ce8aede0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.557 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.557 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.558 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.558 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.559 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-26T17:22:34.558428) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.559 14 DEBUG ceilometer.compute.pollsters [-] 3a17d6a2-7bda-406b-a180-049f0e7adc78/disk.device.read.bytes volume: 23775232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.559 14 DEBUG ceilometer.compute.pollsters [-] 3a17d6a2-7bda-406b-a180-049f0e7adc78/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.560 14 DEBUG ceilometer.compute.pollsters [-] 186e87cb-beb9-48df-8b10-dfc5c8afe996/disk.device.read.bytes volume: 31025664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.560 14 DEBUG ceilometer.compute.pollsters [-] 186e87cb-beb9-48df-8b10-dfc5c8afe996/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.561 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.561 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f04ce8aef00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.562 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.562 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.562 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.563 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.563 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-26T17:22:34.563212) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.564 14 DEBUG ceilometer.compute.pollsters [-] 3a17d6a2-7bda-406b-a180-049f0e7adc78/disk.device.read.latency volume: 404568516 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.564 14 DEBUG ceilometer.compute.pollsters [-] 3a17d6a2-7bda-406b-a180-049f0e7adc78/disk.device.read.latency volume: 594566 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.565 14 DEBUG ceilometer.compute.pollsters [-] 186e87cb-beb9-48df-8b10-dfc5c8afe996/disk.device.read.latency volume: 602872387 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.565 14 DEBUG ceilometer.compute.pollsters [-] 186e87cb-beb9-48df-8b10-dfc5c8afe996/disk.device.read.latency volume: 55976508 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.566 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.566 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f04ce8ac710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.566 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.567 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.567 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.568 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.568 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-26T17:22:34.567890) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.568 14 DEBUG ceilometer.compute.pollsters [-] 3a17d6a2-7bda-406b-a180-049f0e7adc78/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.569 14 DEBUG ceilometer.compute.pollsters [-] 186e87cb-beb9-48df-8b10-dfc5c8afe996/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.570 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.570 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f04ce8aef60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.570 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.571 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.571 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.571 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.572 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-26T17:22:34.571598) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.572 14 DEBUG ceilometer.compute.pollsters [-] 3a17d6a2-7bda-406b-a180-049f0e7adc78/disk.device.read.requests volume: 760 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.573 14 DEBUG ceilometer.compute.pollsters [-] 3a17d6a2-7bda-406b-a180-049f0e7adc78/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.573 14 DEBUG ceilometer.compute.pollsters [-] 186e87cb-beb9-48df-8b10-dfc5c8afe996/disk.device.read.requests volume: 1137 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.573 14 DEBUG ceilometer.compute.pollsters [-] 186e87cb-beb9-48df-8b10-dfc5c8afe996/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.574 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.574 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f04ce8aefc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.574 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.575 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.575 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.575 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.575 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-26T17:22:34.575460) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.576 14 DEBUG ceilometer.compute.pollsters [-] 3a17d6a2-7bda-406b-a180-049f0e7adc78/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.576 14 DEBUG ceilometer.compute.pollsters [-] 3a17d6a2-7bda-406b-a180-049f0e7adc78/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.576 14 DEBUG ceilometer.compute.pollsters [-] 186e87cb-beb9-48df-8b10-dfc5c8afe996/disk.device.usage volume: 29949952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.577 14 DEBUG ceilometer.compute.pollsters [-] 186e87cb-beb9-48df-8b10-dfc5c8afe996/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.577 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.578 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.578 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.579 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.579 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.579 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.579 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.580 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.580 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.580 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.580 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.580 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.581 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.581 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.581 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.581 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.582 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.582 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.582 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.582 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.582 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.583 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.583 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.583 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.583 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.583 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:22:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:22:34.584 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:22:35 compute-0 nova_compute[185389]: 2026-01-26 17:22:35.868 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:22:37 compute-0 nova_compute[185389]: 2026-01-26 17:22:37.147 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:22:38 compute-0 podman[257009]: 2026-01-26 17:22:38.232889018 +0000 UTC m=+0.087437346 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, name=ubi9-minimal, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, version=9.6, release=1755695350, vcs-type=git, io.openshift.tags=minimal rhel9, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=openstack_network_exporter)
Jan 26 17:22:38 compute-0 podman[257011]: 2026-01-26 17:22:38.257278909 +0000 UTC m=+0.111884858 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 26 17:22:38 compute-0 podman[257010]: 2026-01-26 17:22:38.258736619 +0000 UTC m=+0.111651182 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260120, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 26 17:22:40 compute-0 podman[257070]: 2026-01-26 17:22:40.184276579 +0000 UTC m=+0.077184416 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 26 17:22:40 compute-0 nova_compute[185389]: 2026-01-26 17:22:40.870 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:22:42 compute-0 nova_compute[185389]: 2026-01-26 17:22:42.151 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:22:42 compute-0 podman[257093]: 2026-01-26 17:22:42.196503163 +0000 UTC m=+0.076174040 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 26 17:22:42 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:22:42.907 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1c72c11d-5050-47c3-89e8-912766588fb3, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:22:45 compute-0 podman[257112]: 2026-01-26 17:22:45.202272075 +0000 UTC m=+0.087641559 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 17:22:45 compute-0 podman[257113]: 2026-01-26 17:22:45.21531811 +0000 UTC m=+0.099261715 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, container_name=kepler, managed_by=edpm_ansible, release=1214.1726694543, config_id=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., release-0.7.12=, io.buildah.version=1.29.0, vcs-type=git, architecture=x86_64)
Jan 26 17:22:45 compute-0 podman[257111]: 2026-01-26 17:22:45.259544601 +0000 UTC m=+0.152024528 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 26 17:22:45 compute-0 nova_compute[185389]: 2026-01-26 17:22:45.873 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:22:47 compute-0 nova_compute[185389]: 2026-01-26 17:22:47.153 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:22:47 compute-0 ovn_controller[97699]: 2026-01-26T17:22:47Z|00099|binding|INFO|Releasing lport 1a341684-bed3-4740-9502-499c9512f610 from this chassis (sb_readonly=0)
Jan 26 17:22:47 compute-0 ovn_controller[97699]: 2026-01-26T17:22:47Z|00100|binding|INFO|Releasing lport d58b7d53-5cc1-4ed8-aa06-162121fd1800 from this chassis (sb_readonly=0)
Jan 26 17:22:47 compute-0 ovn_controller[97699]: 2026-01-26T17:22:47Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:95:38:b5 10.100.0.3
Jan 26 17:22:47 compute-0 ovn_controller[97699]: 2026-01-26T17:22:47Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:95:38:b5 10.100.0.3
Jan 26 17:22:47 compute-0 nova_compute[185389]: 2026-01-26 17:22:47.923 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:22:50 compute-0 nova_compute[185389]: 2026-01-26 17:22:50.875 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:22:52 compute-0 nova_compute[185389]: 2026-01-26 17:22:52.156 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:22:52 compute-0 nova_compute[185389]: 2026-01-26 17:22:52.222 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:22:54 compute-0 sshd-session[257186]: Invalid user ubuntu from 103.42.57.158 port 47938
Jan 26 17:22:54 compute-0 sshd-session[257186]: Received disconnect from 103.42.57.158 port 47938:11:  [preauth]
Jan 26 17:22:54 compute-0 sshd-session[257186]: Disconnected from invalid user ubuntu 103.42.57.158 port 47938 [preauth]
Jan 26 17:22:55 compute-0 nova_compute[185389]: 2026-01-26 17:22:55.736 185393 DEBUG nova.objects.instance [None req-c855e9dc-5d86-4fa2-a18f-d6c59647f36e 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] Lazy-loading 'flavor' on Instance uuid 3a17d6a2-7bda-406b-a180-049f0e7adc78 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 17:22:55 compute-0 nova_compute[185389]: 2026-01-26 17:22:55.779 185393 DEBUG oslo_concurrency.lockutils [None req-c855e9dc-5d86-4fa2-a18f-d6c59647f36e 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] Acquiring lock "refresh_cache-3a17d6a2-7bda-406b-a180-049f0e7adc78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 17:22:55 compute-0 nova_compute[185389]: 2026-01-26 17:22:55.779 185393 DEBUG oslo_concurrency.lockutils [None req-c855e9dc-5d86-4fa2-a18f-d6c59647f36e 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] Acquired lock "refresh_cache-3a17d6a2-7bda-406b-a180-049f0e7adc78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 17:22:55 compute-0 nova_compute[185389]: 2026-01-26 17:22:55.878 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:22:57 compute-0 nova_compute[185389]: 2026-01-26 17:22:57.159 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:22:58 compute-0 nova_compute[185389]: 2026-01-26 17:22:58.808 185393 DEBUG nova.network.neutron [None req-c855e9dc-5d86-4fa2-a18f-d6c59647f36e 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 17:22:59 compute-0 nova_compute[185389]: 2026-01-26 17:22:59.029 185393 DEBUG nova.compute.manager [req-92e0b85f-8bb6-44e2-af7b-e17c4f49e43d req-ccfb54c3-b971-45dc-b832-e1f1a66fc159 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] Received event network-changed-244cc784-cc22-4baa-ae9b-a9648a2a11b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 17:22:59 compute-0 nova_compute[185389]: 2026-01-26 17:22:59.030 185393 DEBUG nova.compute.manager [req-92e0b85f-8bb6-44e2-af7b-e17c4f49e43d req-ccfb54c3-b971-45dc-b832-e1f1a66fc159 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] Refreshing instance network info cache due to event network-changed-244cc784-cc22-4baa-ae9b-a9648a2a11b8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 17:22:59 compute-0 nova_compute[185389]: 2026-01-26 17:22:59.031 185393 DEBUG oslo_concurrency.lockutils [req-92e0b85f-8bb6-44e2-af7b-e17c4f49e43d req-ccfb54c3-b971-45dc-b832-e1f1a66fc159 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquiring lock "refresh_cache-3a17d6a2-7bda-406b-a180-049f0e7adc78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 17:22:59 compute-0 podman[201244]: time="2026-01-26T17:22:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 17:22:59 compute-0 podman[201244]: @ - - [26/Jan/2026:17:22:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29741 "" "Go-http-client/1.1"
Jan 26 17:22:59 compute-0 podman[201244]: @ - - [26/Jan/2026:17:22:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4848 "" "Go-http-client/1.1"
Jan 26 17:23:00 compute-0 nova_compute[185389]: 2026-01-26 17:23:00.882 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:01 compute-0 nova_compute[185389]: 2026-01-26 17:23:01.017 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:01 compute-0 openstack_network_exporter[204387]: ERROR   17:23:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 17:23:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:23:01 compute-0 openstack_network_exporter[204387]: ERROR   17:23:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 17:23:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:23:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:01.777 106955 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:23:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:01.778 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:23:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:01.778 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:23:02 compute-0 nova_compute[185389]: 2026-01-26 17:23:02.162 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:02 compute-0 nova_compute[185389]: 2026-01-26 17:23:02.404 185393 DEBUG nova.network.neutron [None req-c855e9dc-5d86-4fa2-a18f-d6c59647f36e 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] Updating instance_info_cache with network_info: [{"id": "244cc784-cc22-4baa-ae9b-a9648a2a11b8", "address": "fa:16:3e:95:38:b5", "network": {"id": "f2973d9a-cd90-4302-94cd-5d199c633af0", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1831496438-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63b4132d471f40c4bc46982b5adba0ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap244cc784-cc", "ovs_interfaceid": "244cc784-cc22-4baa-ae9b-a9648a2a11b8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 17:23:02 compute-0 nova_compute[185389]: 2026-01-26 17:23:02.426 185393 DEBUG oslo_concurrency.lockutils [None req-c855e9dc-5d86-4fa2-a18f-d6c59647f36e 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] Releasing lock "refresh_cache-3a17d6a2-7bda-406b-a180-049f0e7adc78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 17:23:02 compute-0 nova_compute[185389]: 2026-01-26 17:23:02.426 185393 DEBUG nova.compute.manager [None req-c855e9dc-5d86-4fa2-a18f-d6c59647f36e 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144
Jan 26 17:23:02 compute-0 nova_compute[185389]: 2026-01-26 17:23:02.427 185393 DEBUG nova.compute.manager [None req-c855e9dc-5d86-4fa2-a18f-d6c59647f36e 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] network_info to inject: |[{"id": "244cc784-cc22-4baa-ae9b-a9648a2a11b8", "address": "fa:16:3e:95:38:b5", "network": {"id": "f2973d9a-cd90-4302-94cd-5d199c633af0", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1831496438-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63b4132d471f40c4bc46982b5adba0ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap244cc784-cc", "ovs_interfaceid": "244cc784-cc22-4baa-ae9b-a9648a2a11b8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145
Jan 26 17:23:02 compute-0 nova_compute[185389]: 2026-01-26 17:23:02.429 185393 DEBUG oslo_concurrency.lockutils [req-92e0b85f-8bb6-44e2-af7b-e17c4f49e43d req-ccfb54c3-b971-45dc-b832-e1f1a66fc159 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquired lock "refresh_cache-3a17d6a2-7bda-406b-a180-049f0e7adc78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 17:23:02 compute-0 nova_compute[185389]: 2026-01-26 17:23:02.429 185393 DEBUG nova.network.neutron [req-92e0b85f-8bb6-44e2-af7b-e17c4f49e43d req-ccfb54c3-b971-45dc-b832-e1f1a66fc159 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] Refreshing network info cache for port 244cc784-cc22-4baa-ae9b-a9648a2a11b8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 17:23:05 compute-0 nova_compute[185389]: 2026-01-26 17:23:05.884 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:06 compute-0 nova_compute[185389]: 2026-01-26 17:23:06.565 185393 DEBUG nova.objects.instance [None req-cefc2d4a-b5f0-498f-bc4e-681e98e6a558 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] Lazy-loading 'flavor' on Instance uuid 3a17d6a2-7bda-406b-a180-049f0e7adc78 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 17:23:06 compute-0 nova_compute[185389]: 2026-01-26 17:23:06.600 185393 DEBUG oslo_concurrency.lockutils [None req-cefc2d4a-b5f0-498f-bc4e-681e98e6a558 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] Acquiring lock "refresh_cache-3a17d6a2-7bda-406b-a180-049f0e7adc78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 17:23:07 compute-0 nova_compute[185389]: 2026-01-26 17:23:07.166 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:07 compute-0 nova_compute[185389]: 2026-01-26 17:23:07.292 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:07 compute-0 nova_compute[185389]: 2026-01-26 17:23:07.331 185393 DEBUG nova.network.neutron [req-92e0b85f-8bb6-44e2-af7b-e17c4f49e43d req-ccfb54c3-b971-45dc-b832-e1f1a66fc159 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] Updated VIF entry in instance network info cache for port 244cc784-cc22-4baa-ae9b-a9648a2a11b8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 17:23:07 compute-0 nova_compute[185389]: 2026-01-26 17:23:07.331 185393 DEBUG nova.network.neutron [req-92e0b85f-8bb6-44e2-af7b-e17c4f49e43d req-ccfb54c3-b971-45dc-b832-e1f1a66fc159 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] Updating instance_info_cache with network_info: [{"id": "244cc784-cc22-4baa-ae9b-a9648a2a11b8", "address": "fa:16:3e:95:38:b5", "network": {"id": "f2973d9a-cd90-4302-94cd-5d199c633af0", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1831496438-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63b4132d471f40c4bc46982b5adba0ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap244cc784-cc", "ovs_interfaceid": "244cc784-cc22-4baa-ae9b-a9648a2a11b8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 17:23:07 compute-0 nova_compute[185389]: 2026-01-26 17:23:07.604 185393 DEBUG oslo_concurrency.lockutils [req-92e0b85f-8bb6-44e2-af7b-e17c4f49e43d req-ccfb54c3-b971-45dc-b832-e1f1a66fc159 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Releasing lock "refresh_cache-3a17d6a2-7bda-406b-a180-049f0e7adc78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 17:23:07 compute-0 nova_compute[185389]: 2026-01-26 17:23:07.604 185393 DEBUG oslo_concurrency.lockutils [None req-cefc2d4a-b5f0-498f-bc4e-681e98e6a558 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] Acquired lock "refresh_cache-3a17d6a2-7bda-406b-a180-049f0e7adc78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 17:23:08 compute-0 nova_compute[185389]: 2026-01-26 17:23:08.265 185393 DEBUG oslo_concurrency.lockutils [None req-bb3b79ea-4bca-4e43-812d-c5ae9a037f30 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Acquiring lock "186e87cb-beb9-48df-8b10-dfc5c8afe996" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:23:08 compute-0 nova_compute[185389]: 2026-01-26 17:23:08.266 185393 DEBUG oslo_concurrency.lockutils [None req-bb3b79ea-4bca-4e43-812d-c5ae9a037f30 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Lock "186e87cb-beb9-48df-8b10-dfc5c8afe996" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:23:08 compute-0 nova_compute[185389]: 2026-01-26 17:23:08.267 185393 INFO nova.compute.manager [None req-bb3b79ea-4bca-4e43-812d-c5ae9a037f30 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Rebooting instance
Jan 26 17:23:08 compute-0 nova_compute[185389]: 2026-01-26 17:23:08.303 185393 DEBUG oslo_concurrency.lockutils [None req-bb3b79ea-4bca-4e43-812d-c5ae9a037f30 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Acquiring lock "refresh_cache-186e87cb-beb9-48df-8b10-dfc5c8afe996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 17:23:08 compute-0 nova_compute[185389]: 2026-01-26 17:23:08.303 185393 DEBUG oslo_concurrency.lockutils [None req-bb3b79ea-4bca-4e43-812d-c5ae9a037f30 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Acquired lock "refresh_cache-186e87cb-beb9-48df-8b10-dfc5c8afe996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 17:23:08 compute-0 nova_compute[185389]: 2026-01-26 17:23:08.304 185393 DEBUG nova.network.neutron [None req-bb3b79ea-4bca-4e43-812d-c5ae9a037f30 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 17:23:09 compute-0 podman[257190]: 2026-01-26 17:23:09.196092506 +0000 UTC m=+0.072321543 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 26 17:23:09 compute-0 podman[257188]: 2026-01-26 17:23:09.19986659 +0000 UTC m=+0.081039991 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=openstack_network_exporter, io.openshift.expose-services=, distribution-scope=public, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, name=ubi9-minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-type=git)
Jan 26 17:23:09 compute-0 podman[257189]: 2026-01-26 17:23:09.215988427 +0000 UTC m=+0.095370979 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260120)
Jan 26 17:23:10 compute-0 nova_compute[185389]: 2026-01-26 17:23:10.153 185393 DEBUG nova.network.neutron [None req-cefc2d4a-b5f0-498f-bc4e-681e98e6a558 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 17:23:10 compute-0 nova_compute[185389]: 2026-01-26 17:23:10.529 185393 DEBUG nova.compute.manager [req-79f310df-b056-4ced-882e-1c3506642a70 req-fef32d58-bf96-41f2-a745-2fda7fce76a7 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] Received event network-changed-244cc784-cc22-4baa-ae9b-a9648a2a11b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 17:23:10 compute-0 nova_compute[185389]: 2026-01-26 17:23:10.530 185393 DEBUG nova.compute.manager [req-79f310df-b056-4ced-882e-1c3506642a70 req-fef32d58-bf96-41f2-a745-2fda7fce76a7 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] Refreshing instance network info cache due to event network-changed-244cc784-cc22-4baa-ae9b-a9648a2a11b8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 17:23:10 compute-0 nova_compute[185389]: 2026-01-26 17:23:10.530 185393 DEBUG oslo_concurrency.lockutils [req-79f310df-b056-4ced-882e-1c3506642a70 req-fef32d58-bf96-41f2-a745-2fda7fce76a7 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquiring lock "refresh_cache-3a17d6a2-7bda-406b-a180-049f0e7adc78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 17:23:10 compute-0 nova_compute[185389]: 2026-01-26 17:23:10.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:23:10 compute-0 nova_compute[185389]: 2026-01-26 17:23:10.889 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:11 compute-0 podman[257250]: 2026-01-26 17:23:11.215864472 +0000 UTC m=+0.096857489 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 26 17:23:11 compute-0 nova_compute[185389]: 2026-01-26 17:23:11.675 185393 DEBUG nova.network.neutron [None req-bb3b79ea-4bca-4e43-812d-c5ae9a037f30 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Updating instance_info_cache with network_info: [{"id": "6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3", "address": "fa:16:3e:b3:ea:64", "network": {"id": "4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1598418847-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b9ff6ad3012499db2eb0a82a1ccbcaa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e11a3e1-dc", "ovs_interfaceid": "6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 17:23:11 compute-0 nova_compute[185389]: 2026-01-26 17:23:11.884 185393 DEBUG oslo_concurrency.lockutils [None req-bb3b79ea-4bca-4e43-812d-c5ae9a037f30 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Releasing lock "refresh_cache-186e87cb-beb9-48df-8b10-dfc5c8afe996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 17:23:11 compute-0 nova_compute[185389]: 2026-01-26 17:23:11.887 185393 DEBUG nova.compute.manager [None req-bb3b79ea-4bca-4e43-812d-c5ae9a037f30 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 17:23:12 compute-0 kernel: tap6e11a3e1-dc (unregistering): left promiscuous mode
Jan 26 17:23:12 compute-0 NetworkManager[56253]: <info>  [1769448192.0485] device (tap6e11a3e1-dc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 17:23:12 compute-0 ovn_controller[97699]: 2026-01-26T17:23:12Z|00101|binding|INFO|Releasing lport 6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3 from this chassis (sb_readonly=0)
Jan 26 17:23:12 compute-0 ovn_controller[97699]: 2026-01-26T17:23:12Z|00102|binding|INFO|Setting lport 6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3 down in Southbound
Jan 26 17:23:12 compute-0 ovn_controller[97699]: 2026-01-26T17:23:12Z|00103|binding|INFO|Removing iface tap6e11a3e1-dc ovn-installed in OVS
Jan 26 17:23:12 compute-0 nova_compute[185389]: 2026-01-26 17:23:12.062 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:12 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:12.069 106955 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b3:ea:64 10.100.0.5'], port_security=['fa:16:3e:b3:ea:64 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '186e87cb-beb9-48df-8b10-dfc5c8afe996', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9b9ff6ad3012499db2eb0a82a1ccbcaa', 'neutron:revision_number': '4', 'neutron:security_group_ids': '34094d50-e876-4bbe-985c-d748419fede6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.201'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a0b14c64-3c3f-4e5b-a736-e555c8460dfa, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7faee18cfdf0>], logical_port=6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7faee18cfdf0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 17:23:12 compute-0 nova_compute[185389]: 2026-01-26 17:23:12.070 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:12 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:12.071 106955 INFO neutron.agent.ovn.metadata.agent [-] Port 6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3 in datapath 4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac unbound from our chassis
Jan 26 17:23:12 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:12.074 106955 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 17:23:12 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:12.077 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[a1c104b6-2fbc-4782-8af6-ff440ffac48b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:23:12 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:12.078 106955 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac namespace which is not needed anymore
Jan 26 17:23:12 compute-0 nova_compute[185389]: 2026-01-26 17:23:12.078 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:12 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Deactivated successfully.
Jan 26 17:23:12 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Consumed 41.075s CPU time.
Jan 26 17:23:12 compute-0 systemd-machined[156679]: Machine qemu-7-instance-00000007 terminated.
Jan 26 17:23:12 compute-0 nova_compute[185389]: 2026-01-26 17:23:12.169 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:12 compute-0 neutron-haproxy-ovnmeta-4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac[256239]: [NOTICE]   (256244) : haproxy version is 2.8.14-c23fe91
Jan 26 17:23:12 compute-0 neutron-haproxy-ovnmeta-4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac[256239]: [NOTICE]   (256244) : path to executable is /usr/sbin/haproxy
Jan 26 17:23:12 compute-0 neutron-haproxy-ovnmeta-4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac[256239]: [WARNING]  (256244) : Exiting Master process...
Jan 26 17:23:12 compute-0 neutron-haproxy-ovnmeta-4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac[256239]: [ALERT]    (256244) : Current worker (256246) exited with code 143 (Terminated)
Jan 26 17:23:12 compute-0 neutron-haproxy-ovnmeta-4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac[256239]: [WARNING]  (256244) : All workers exited. Exiting... (0)
Jan 26 17:23:12 compute-0 systemd[1]: libpod-f7c2bca5356e56f633649d9087a134e5db5082177d8f65ef784095d1d5566e8f.scope: Deactivated successfully.
Jan 26 17:23:12 compute-0 podman[257295]: 2026-01-26 17:23:12.2623553 +0000 UTC m=+0.078677816 container died f7c2bca5356e56f633649d9087a134e5db5082177d8f65ef784095d1d5566e8f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202)
Jan 26 17:23:12 compute-0 nova_compute[185389]: 2026-01-26 17:23:12.300 185393 INFO nova.virt.libvirt.driver [-] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Instance destroyed successfully.
Jan 26 17:23:12 compute-0 nova_compute[185389]: 2026-01-26 17:23:12.301 185393 DEBUG nova.objects.instance [None req-bb3b79ea-4bca-4e43-812d-c5ae9a037f30 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Lazy-loading 'resources' on Instance uuid 186e87cb-beb9-48df-8b10-dfc5c8afe996 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 17:23:12 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f7c2bca5356e56f633649d9087a134e5db5082177d8f65ef784095d1d5566e8f-userdata-shm.mount: Deactivated successfully.
Jan 26 17:23:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-e6bbc1152a2df0776e444e2908cf51ed6f48a74f65b97babea768769da43c12f-merged.mount: Deactivated successfully.
Jan 26 17:23:12 compute-0 nova_compute[185389]: 2026-01-26 17:23:12.322 185393 DEBUG nova.virt.libvirt.vif [None req-bb3b79ea-4bca-4e43-812d-c5ae9a037f30 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T17:21:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-34810632',display_name='tempest-ServerActionsTestJSON-server-34810632',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-34810632',id=7,image_ref='90acf026-cf3a-409a-999e-35d89bb9a6bf',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHTXJeN/GiNVdk5tCK494xdfwd2oGU0rMaOXTgIR00PDsryTQP8qZXOiVkgunB3Q/QnB+t1PHKegnTlGoORFTNpKcXfSp02clner5iC0LHdkku2AHdsO52WWVjg3zvN4Sw==',key_name='tempest-keypair-288037080',keypairs=<?>,launch_index=0,launched_at=2026-01-26T17:21:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9b9ff6ad3012499db2eb0a82a1ccbcaa',ramdisk_id='',reservation_id='r-evpcozau',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='90acf026-cf3a-409a-999e-35d89bb9a6bf',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-254851137',owner_user_name='tempest-ServerActionsTestJSON-254851137-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T17:23:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='6acd3be55c754b3dbf8ef6c0922b18ae',uuid=186e87cb-beb9-48df-8b10-dfc5c8afe996,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3", "address": "fa:16:3e:b3:ea:64", "network": {"id": "4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1598418847-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b9ff6ad3012499db2eb0a82a1ccbcaa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e11a3e1-dc", "ovs_interfaceid": "6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 17:23:12 compute-0 nova_compute[185389]: 2026-01-26 17:23:12.323 185393 DEBUG nova.network.os_vif_util [None req-bb3b79ea-4bca-4e43-812d-c5ae9a037f30 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Converting VIF {"id": "6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3", "address": "fa:16:3e:b3:ea:64", "network": {"id": "4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1598418847-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b9ff6ad3012499db2eb0a82a1ccbcaa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e11a3e1-dc", "ovs_interfaceid": "6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 17:23:12 compute-0 nova_compute[185389]: 2026-01-26 17:23:12.324 185393 DEBUG nova.network.os_vif_util [None req-bb3b79ea-4bca-4e43-812d-c5ae9a037f30 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b3:ea:64,bridge_name='br-int',has_traffic_filtering=True,id=6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3,network=Network(4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e11a3e1-dc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 17:23:12 compute-0 nova_compute[185389]: 2026-01-26 17:23:12.324 185393 DEBUG os_vif [None req-bb3b79ea-4bca-4e43-812d-c5ae9a037f30 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b3:ea:64,bridge_name='br-int',has_traffic_filtering=True,id=6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3,network=Network(4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e11a3e1-dc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 17:23:12 compute-0 nova_compute[185389]: 2026-01-26 17:23:12.330 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:12 compute-0 nova_compute[185389]: 2026-01-26 17:23:12.331 185393 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6e11a3e1-dc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:23:12 compute-0 nova_compute[185389]: 2026-01-26 17:23:12.335 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:12 compute-0 nova_compute[185389]: 2026-01-26 17:23:12.337 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 17:23:12 compute-0 podman[257295]: 2026-01-26 17:23:12.34226395 +0000 UTC m=+0.158586426 container cleanup f7c2bca5356e56f633649d9087a134e5db5082177d8f65ef784095d1d5566e8f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 26 17:23:12 compute-0 nova_compute[185389]: 2026-01-26 17:23:12.343 185393 INFO os_vif [None req-bb3b79ea-4bca-4e43-812d-c5ae9a037f30 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b3:ea:64,bridge_name='br-int',has_traffic_filtering=True,id=6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3,network=Network(4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e11a3e1-dc')
Jan 26 17:23:12 compute-0 systemd[1]: libpod-conmon-f7c2bca5356e56f633649d9087a134e5db5082177d8f65ef784095d1d5566e8f.scope: Deactivated successfully.
Jan 26 17:23:12 compute-0 nova_compute[185389]: 2026-01-26 17:23:12.357 185393 DEBUG nova.virt.libvirt.driver [None req-bb3b79ea-4bca-4e43-812d-c5ae9a037f30 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Start _get_guest_xml network_info=[{"id": "6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3", "address": "fa:16:3e:b3:ea:64", "network": {"id": "4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1598418847-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b9ff6ad3012499db2eb0a82a1ccbcaa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e11a3e1-dc", "ovs_interfaceid": "6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=90acf026-cf3a-409a-999e-35d89bb9a6bf,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'boot_index': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '90acf026-cf3a-409a-999e-35d89bb9a6bf'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 17:23:12 compute-0 nova_compute[185389]: 2026-01-26 17:23:12.368 185393 WARNING nova.virt.libvirt.driver [None req-bb3b79ea-4bca-4e43-812d-c5ae9a037f30 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:23:12 compute-0 nova_compute[185389]: 2026-01-26 17:23:12.385 185393 DEBUG nova.virt.libvirt.host [None req-bb3b79ea-4bca-4e43-812d-c5ae9a037f30 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 17:23:12 compute-0 nova_compute[185389]: 2026-01-26 17:23:12.386 185393 DEBUG nova.virt.libvirt.host [None req-bb3b79ea-4bca-4e43-812d-c5ae9a037f30 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 17:23:12 compute-0 podman[257321]: 2026-01-26 17:23:12.389328327 +0000 UTC m=+0.105769892 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Jan 26 17:23:12 compute-0 nova_compute[185389]: 2026-01-26 17:23:12.393 185393 DEBUG nova.virt.libvirt.host [None req-bb3b79ea-4bca-4e43-812d-c5ae9a037f30 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 17:23:12 compute-0 nova_compute[185389]: 2026-01-26 17:23:12.394 185393 DEBUG nova.virt.libvirt.host [None req-bb3b79ea-4bca-4e43-812d-c5ae9a037f30 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 17:23:12 compute-0 nova_compute[185389]: 2026-01-26 17:23:12.394 185393 DEBUG nova.virt.libvirt.driver [None req-bb3b79ea-4bca-4e43-812d-c5ae9a037f30 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 17:23:12 compute-0 nova_compute[185389]: 2026-01-26 17:23:12.395 185393 DEBUG nova.virt.hardware [None req-bb3b79ea-4bca-4e43-812d-c5ae9a037f30 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T17:20:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8d013773-e8ea-4b83-a8e3-f58d9749637f',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=90acf026-cf3a-409a-999e-35d89bb9a6bf,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 17:23:12 compute-0 nova_compute[185389]: 2026-01-26 17:23:12.396 185393 DEBUG nova.virt.hardware [None req-bb3b79ea-4bca-4e43-812d-c5ae9a037f30 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 17:23:12 compute-0 nova_compute[185389]: 2026-01-26 17:23:12.396 185393 DEBUG nova.virt.hardware [None req-bb3b79ea-4bca-4e43-812d-c5ae9a037f30 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 17:23:12 compute-0 nova_compute[185389]: 2026-01-26 17:23:12.397 185393 DEBUG nova.virt.hardware [None req-bb3b79ea-4bca-4e43-812d-c5ae9a037f30 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 17:23:12 compute-0 nova_compute[185389]: 2026-01-26 17:23:12.397 185393 DEBUG nova.virt.hardware [None req-bb3b79ea-4bca-4e43-812d-c5ae9a037f30 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 17:23:12 compute-0 nova_compute[185389]: 2026-01-26 17:23:12.397 185393 DEBUG nova.virt.hardware [None req-bb3b79ea-4bca-4e43-812d-c5ae9a037f30 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 17:23:12 compute-0 nova_compute[185389]: 2026-01-26 17:23:12.398 185393 DEBUG nova.virt.hardware [None req-bb3b79ea-4bca-4e43-812d-c5ae9a037f30 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 17:23:12 compute-0 nova_compute[185389]: 2026-01-26 17:23:12.398 185393 DEBUG nova.virt.hardware [None req-bb3b79ea-4bca-4e43-812d-c5ae9a037f30 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 17:23:12 compute-0 nova_compute[185389]: 2026-01-26 17:23:12.399 185393 DEBUG nova.virt.hardware [None req-bb3b79ea-4bca-4e43-812d-c5ae9a037f30 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 17:23:12 compute-0 nova_compute[185389]: 2026-01-26 17:23:12.400 185393 DEBUG nova.virt.hardware [None req-bb3b79ea-4bca-4e43-812d-c5ae9a037f30 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 17:23:12 compute-0 nova_compute[185389]: 2026-01-26 17:23:12.400 185393 DEBUG nova.virt.hardware [None req-bb3b79ea-4bca-4e43-812d-c5ae9a037f30 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 17:23:12 compute-0 nova_compute[185389]: 2026-01-26 17:23:12.401 185393 DEBUG nova.objects.instance [None req-bb3b79ea-4bca-4e43-812d-c5ae9a037f30 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Lazy-loading 'vcpu_model' on Instance uuid 186e87cb-beb9-48df-8b10-dfc5c8afe996 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 17:23:12 compute-0 nova_compute[185389]: 2026-01-26 17:23:12.424 185393 DEBUG oslo_concurrency.processutils [None req-bb3b79ea-4bca-4e43-812d-c5ae9a037f30 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/186e87cb-beb9-48df-8b10-dfc5c8afe996/disk.config --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:23:12 compute-0 podman[257358]: 2026-01-26 17:23:12.449237294 +0000 UTC m=+0.069853958 container remove f7c2bca5356e56f633649d9087a134e5db5082177d8f65ef784095d1d5566e8f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 17:23:12 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:12.462 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[7f246a0d-c161-450f-aefb-25cec97431ef]: (4, ('Mon Jan 26 05:23:12 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac (f7c2bca5356e56f633649d9087a134e5db5082177d8f65ef784095d1d5566e8f)\nf7c2bca5356e56f633649d9087a134e5db5082177d8f65ef784095d1d5566e8f\nMon Jan 26 05:23:12 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac (f7c2bca5356e56f633649d9087a134e5db5082177d8f65ef784095d1d5566e8f)\nf7c2bca5356e56f633649d9087a134e5db5082177d8f65ef784095d1d5566e8f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:23:12 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:12.465 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[4593e2cb-5caf-4716-9a42-3c9c41a513e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:23:12 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:12.466 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4a7c91d4-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:23:12 compute-0 nova_compute[185389]: 2026-01-26 17:23:12.469 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:12 compute-0 kernel: tap4a7c91d4-b0: left promiscuous mode
Jan 26 17:23:12 compute-0 nova_compute[185389]: 2026-01-26 17:23:12.482 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:12 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:12.487 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[25866eba-4b4d-4332-bae5-fffa27cdba1d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:23:12 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:12.504 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[1b33b849-0701-4c87-a435-7e43533a8ad2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:23:12 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:12.506 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[28ebf853-b7f7-42c2-a772-a965e2ec4e81]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:23:12 compute-0 nova_compute[185389]: 2026-01-26 17:23:12.510 185393 DEBUG oslo_concurrency.processutils [None req-bb3b79ea-4bca-4e43-812d-c5ae9a037f30 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/186e87cb-beb9-48df-8b10-dfc5c8afe996/disk.config --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:23:12 compute-0 nova_compute[185389]: 2026-01-26 17:23:12.511 185393 DEBUG oslo_concurrency.lockutils [None req-bb3b79ea-4bca-4e43-812d-c5ae9a037f30 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Acquiring lock "/var/lib/nova/instances/186e87cb-beb9-48df-8b10-dfc5c8afe996/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:23:12 compute-0 nova_compute[185389]: 2026-01-26 17:23:12.511 185393 DEBUG oslo_concurrency.lockutils [None req-bb3b79ea-4bca-4e43-812d-c5ae9a037f30 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Lock "/var/lib/nova/instances/186e87cb-beb9-48df-8b10-dfc5c8afe996/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:23:12 compute-0 nova_compute[185389]: 2026-01-26 17:23:12.512 185393 DEBUG oslo_concurrency.lockutils [None req-bb3b79ea-4bca-4e43-812d-c5ae9a037f30 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Lock "/var/lib/nova/instances/186e87cb-beb9-48df-8b10-dfc5c8afe996/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:23:12 compute-0 nova_compute[185389]: 2026-01-26 17:23:12.514 185393 DEBUG nova.virt.libvirt.vif [None req-bb3b79ea-4bca-4e43-812d-c5ae9a037f30 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T17:21:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-34810632',display_name='tempest-ServerActionsTestJSON-server-34810632',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-34810632',id=7,image_ref='90acf026-cf3a-409a-999e-35d89bb9a6bf',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHTXJeN/GiNVdk5tCK494xdfwd2oGU0rMaOXTgIR00PDsryTQP8qZXOiVkgunB3Q/QnB+t1PHKegnTlGoORFTNpKcXfSp02clner5iC0LHdkku2AHdsO52WWVjg3zvN4Sw==',key_name='tempest-keypair-288037080',keypairs=<?>,launch_index=0,launched_at=2026-01-26T17:21:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9b9ff6ad3012499db2eb0a82a1ccbcaa',ramdisk_id='',reservation_id='r-evpcozau',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='90acf026-cf3a-409a-999e-35d89bb9a6bf',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-254851137',owner_user_name='tempest-ServerActionsTestJSON-254851137-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T17:23:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='6acd3be55c754b3dbf8ef6c0922b18ae',uuid=186e87cb-beb9-48df-8b10-dfc5c8afe996,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3", "address": "fa:16:3e:b3:ea:64", "network": {"id": "4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1598418847-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b9ff6ad3012499db2eb0a82a1ccbcaa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e11a3e1-dc", "ovs_interfaceid": "6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 17:23:12 compute-0 nova_compute[185389]: 2026-01-26 17:23:12.514 185393 DEBUG nova.network.os_vif_util [None req-bb3b79ea-4bca-4e43-812d-c5ae9a037f30 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Converting VIF {"id": "6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3", "address": "fa:16:3e:b3:ea:64", "network": {"id": "4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1598418847-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b9ff6ad3012499db2eb0a82a1ccbcaa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e11a3e1-dc", "ovs_interfaceid": "6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 17:23:12 compute-0 nova_compute[185389]: 2026-01-26 17:23:12.516 185393 DEBUG nova.network.os_vif_util [None req-bb3b79ea-4bca-4e43-812d-c5ae9a037f30 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b3:ea:64,bridge_name='br-int',has_traffic_filtering=True,id=6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3,network=Network(4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e11a3e1-dc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 17:23:12 compute-0 nova_compute[185389]: 2026-01-26 17:23:12.517 185393 DEBUG nova.objects.instance [None req-bb3b79ea-4bca-4e43-812d-c5ae9a037f30 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Lazy-loading 'pci_devices' on Instance uuid 186e87cb-beb9-48df-8b10-dfc5c8afe996 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 17:23:12 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:12.527 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[7fd77a16-1c63-4ba7-bc78-38e392812904]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675718, 'reachable_time': 29974, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 257375, 'error': None, 'target': 'ovnmeta-4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:23:12 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:12.532 107449 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 17:23:12 compute-0 systemd[1]: run-netns-ovnmeta\x2d4a7c91d4\x2db0d3\x2d4f29\x2dad26\x2de78aa433d3ac.mount: Deactivated successfully.
Jan 26 17:23:12 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:12.532 107449 DEBUG oslo.privsep.daemon [-] privsep: reply[ad58a180-8d5d-4601-9d29-d66764ffffe6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:23:12 compute-0 nova_compute[185389]: 2026-01-26 17:23:12.534 185393 DEBUG nova.virt.libvirt.driver [None req-bb3b79ea-4bca-4e43-812d-c5ae9a037f30 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] End _get_guest_xml xml=<domain type="kvm">
Jan 26 17:23:12 compute-0 nova_compute[185389]:   <uuid>186e87cb-beb9-48df-8b10-dfc5c8afe996</uuid>
Jan 26 17:23:12 compute-0 nova_compute[185389]:   <name>instance-00000007</name>
Jan 26 17:23:12 compute-0 nova_compute[185389]:   <memory>131072</memory>
Jan 26 17:23:12 compute-0 nova_compute[185389]:   <vcpu>1</vcpu>
Jan 26 17:23:12 compute-0 nova_compute[185389]:   <metadata>
Jan 26 17:23:12 compute-0 nova_compute[185389]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 17:23:12 compute-0 nova_compute[185389]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 17:23:12 compute-0 nova_compute[185389]:       <nova:name>tempest-ServerActionsTestJSON-server-34810632</nova:name>
Jan 26 17:23:12 compute-0 nova_compute[185389]:       <nova:creationTime>2026-01-26 17:23:12</nova:creationTime>
Jan 26 17:23:12 compute-0 nova_compute[185389]:       <nova:flavor name="m1.nano">
Jan 26 17:23:12 compute-0 nova_compute[185389]:         <nova:memory>128</nova:memory>
Jan 26 17:23:12 compute-0 nova_compute[185389]:         <nova:disk>1</nova:disk>
Jan 26 17:23:12 compute-0 nova_compute[185389]:         <nova:swap>0</nova:swap>
Jan 26 17:23:12 compute-0 nova_compute[185389]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 17:23:12 compute-0 nova_compute[185389]:         <nova:vcpus>1</nova:vcpus>
Jan 26 17:23:12 compute-0 nova_compute[185389]:       </nova:flavor>
Jan 26 17:23:12 compute-0 nova_compute[185389]:       <nova:owner>
Jan 26 17:23:12 compute-0 nova_compute[185389]:         <nova:user uuid="6acd3be55c754b3dbf8ef6c0922b18ae">tempest-ServerActionsTestJSON-254851137-project-member</nova:user>
Jan 26 17:23:12 compute-0 nova_compute[185389]:         <nova:project uuid="9b9ff6ad3012499db2eb0a82a1ccbcaa">tempest-ServerActionsTestJSON-254851137</nova:project>
Jan 26 17:23:12 compute-0 nova_compute[185389]:       </nova:owner>
Jan 26 17:23:12 compute-0 nova_compute[185389]:       <nova:root type="image" uuid="90acf026-cf3a-409a-999e-35d89bb9a6bf"/>
Jan 26 17:23:12 compute-0 nova_compute[185389]:       <nova:ports>
Jan 26 17:23:12 compute-0 nova_compute[185389]:         <nova:port uuid="6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3">
Jan 26 17:23:12 compute-0 nova_compute[185389]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 26 17:23:12 compute-0 nova_compute[185389]:         </nova:port>
Jan 26 17:23:12 compute-0 nova_compute[185389]:       </nova:ports>
Jan 26 17:23:12 compute-0 nova_compute[185389]:     </nova:instance>
Jan 26 17:23:12 compute-0 nova_compute[185389]:   </metadata>
Jan 26 17:23:12 compute-0 nova_compute[185389]:   <sysinfo type="smbios">
Jan 26 17:23:12 compute-0 nova_compute[185389]:     <system>
Jan 26 17:23:12 compute-0 nova_compute[185389]:       <entry name="manufacturer">RDO</entry>
Jan 26 17:23:12 compute-0 nova_compute[185389]:       <entry name="product">OpenStack Compute</entry>
Jan 26 17:23:12 compute-0 nova_compute[185389]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 17:23:12 compute-0 nova_compute[185389]:       <entry name="serial">186e87cb-beb9-48df-8b10-dfc5c8afe996</entry>
Jan 26 17:23:12 compute-0 nova_compute[185389]:       <entry name="uuid">186e87cb-beb9-48df-8b10-dfc5c8afe996</entry>
Jan 26 17:23:12 compute-0 nova_compute[185389]:       <entry name="family">Virtual Machine</entry>
Jan 26 17:23:12 compute-0 nova_compute[185389]:     </system>
Jan 26 17:23:12 compute-0 nova_compute[185389]:   </sysinfo>
Jan 26 17:23:12 compute-0 nova_compute[185389]:   <os>
Jan 26 17:23:12 compute-0 nova_compute[185389]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 17:23:12 compute-0 nova_compute[185389]:     <boot dev="hd"/>
Jan 26 17:23:12 compute-0 nova_compute[185389]:     <smbios mode="sysinfo"/>
Jan 26 17:23:12 compute-0 nova_compute[185389]:   </os>
Jan 26 17:23:12 compute-0 nova_compute[185389]:   <features>
Jan 26 17:23:12 compute-0 nova_compute[185389]:     <acpi/>
Jan 26 17:23:12 compute-0 nova_compute[185389]:     <apic/>
Jan 26 17:23:12 compute-0 nova_compute[185389]:     <vmcoreinfo/>
Jan 26 17:23:12 compute-0 nova_compute[185389]:   </features>
Jan 26 17:23:12 compute-0 nova_compute[185389]:   <clock offset="utc">
Jan 26 17:23:12 compute-0 nova_compute[185389]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 17:23:12 compute-0 nova_compute[185389]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 17:23:12 compute-0 nova_compute[185389]:     <timer name="hpet" present="no"/>
Jan 26 17:23:12 compute-0 nova_compute[185389]:   </clock>
Jan 26 17:23:12 compute-0 nova_compute[185389]:   <cpu mode="host-model" match="exact">
Jan 26 17:23:12 compute-0 nova_compute[185389]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 17:23:12 compute-0 nova_compute[185389]:   </cpu>
Jan 26 17:23:12 compute-0 nova_compute[185389]:   <devices>
Jan 26 17:23:12 compute-0 nova_compute[185389]:     <disk type="file" device="disk">
Jan 26 17:23:12 compute-0 nova_compute[185389]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 26 17:23:12 compute-0 nova_compute[185389]:       <source file="/var/lib/nova/instances/186e87cb-beb9-48df-8b10-dfc5c8afe996/disk"/>
Jan 26 17:23:12 compute-0 nova_compute[185389]:       <target dev="vda" bus="virtio"/>
Jan 26 17:23:12 compute-0 nova_compute[185389]:     </disk>
Jan 26 17:23:12 compute-0 nova_compute[185389]:     <disk type="file" device="cdrom">
Jan 26 17:23:12 compute-0 nova_compute[185389]:       <driver name="qemu" type="raw" cache="none"/>
Jan 26 17:23:12 compute-0 nova_compute[185389]:       <source file="/var/lib/nova/instances/186e87cb-beb9-48df-8b10-dfc5c8afe996/disk.config"/>
Jan 26 17:23:12 compute-0 nova_compute[185389]:       <target dev="sda" bus="sata"/>
Jan 26 17:23:12 compute-0 nova_compute[185389]:     </disk>
Jan 26 17:23:12 compute-0 nova_compute[185389]:     <interface type="ethernet">
Jan 26 17:23:12 compute-0 nova_compute[185389]:       <mac address="fa:16:3e:b3:ea:64"/>
Jan 26 17:23:12 compute-0 nova_compute[185389]:       <model type="virtio"/>
Jan 26 17:23:12 compute-0 nova_compute[185389]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 17:23:12 compute-0 nova_compute[185389]:       <mtu size="1442"/>
Jan 26 17:23:12 compute-0 nova_compute[185389]:       <target dev="tap6e11a3e1-dc"/>
Jan 26 17:23:12 compute-0 nova_compute[185389]:     </interface>
Jan 26 17:23:12 compute-0 nova_compute[185389]:     <serial type="pty">
Jan 26 17:23:12 compute-0 nova_compute[185389]:       <log file="/var/lib/nova/instances/186e87cb-beb9-48df-8b10-dfc5c8afe996/console.log" append="off"/>
Jan 26 17:23:12 compute-0 nova_compute[185389]:     </serial>
Jan 26 17:23:12 compute-0 nova_compute[185389]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 17:23:12 compute-0 nova_compute[185389]:     <video>
Jan 26 17:23:12 compute-0 nova_compute[185389]:       <model type="virtio"/>
Jan 26 17:23:12 compute-0 nova_compute[185389]:     </video>
Jan 26 17:23:12 compute-0 nova_compute[185389]:     <input type="tablet" bus="usb"/>
Jan 26 17:23:12 compute-0 nova_compute[185389]:     <input type="keyboard" bus="usb"/>
Jan 26 17:23:12 compute-0 nova_compute[185389]:     <rng model="virtio">
Jan 26 17:23:12 compute-0 nova_compute[185389]:       <backend model="random">/dev/urandom</backend>
Jan 26 17:23:12 compute-0 nova_compute[185389]:     </rng>
Jan 26 17:23:12 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root"/>
Jan 26 17:23:12 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:23:12 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:23:12 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:23:12 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:23:12 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:23:12 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:23:12 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:23:12 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:23:12 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:23:12 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:23:12 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:23:12 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:23:12 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:23:12 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:23:12 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:23:12 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:23:12 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:23:12 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:23:12 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:23:12 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:23:12 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:23:12 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:23:12 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:23:12 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:23:12 compute-0 nova_compute[185389]:     <controller type="usb" index="0"/>
Jan 26 17:23:12 compute-0 nova_compute[185389]:     <memballoon model="virtio">
Jan 26 17:23:12 compute-0 nova_compute[185389]:       <stats period="10"/>
Jan 26 17:23:12 compute-0 nova_compute[185389]:     </memballoon>
Jan 26 17:23:12 compute-0 nova_compute[185389]:   </devices>
Jan 26 17:23:12 compute-0 nova_compute[185389]: </domain>
Jan 26 17:23:12 compute-0 nova_compute[185389]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 17:23:12 compute-0 nova_compute[185389]: 2026-01-26 17:23:12.536 185393 DEBUG oslo_concurrency.processutils [None req-bb3b79ea-4bca-4e43-812d-c5ae9a037f30 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/186e87cb-beb9-48df-8b10-dfc5c8afe996/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:23:12 compute-0 nova_compute[185389]: 2026-01-26 17:23:12.607 185393 DEBUG nova.network.neutron [None req-cefc2d4a-b5f0-498f-bc4e-681e98e6a558 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] Updating instance_info_cache with network_info: [{"id": "244cc784-cc22-4baa-ae9b-a9648a2a11b8", "address": "fa:16:3e:95:38:b5", "network": {"id": "f2973d9a-cd90-4302-94cd-5d199c633af0", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1831496438-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63b4132d471f40c4bc46982b5adba0ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap244cc784-cc", "ovs_interfaceid": "244cc784-cc22-4baa-ae9b-a9648a2a11b8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 17:23:12 compute-0 nova_compute[185389]: 2026-01-26 17:23:12.611 185393 DEBUG oslo_concurrency.processutils [None req-bb3b79ea-4bca-4e43-812d-c5ae9a037f30 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/186e87cb-beb9-48df-8b10-dfc5c8afe996/disk --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:23:12 compute-0 nova_compute[185389]: 2026-01-26 17:23:12.612 185393 DEBUG oslo_concurrency.processutils [None req-bb3b79ea-4bca-4e43-812d-c5ae9a037f30 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/186e87cb-beb9-48df-8b10-dfc5c8afe996/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:23:12 compute-0 nova_compute[185389]: 2026-01-26 17:23:12.633 185393 DEBUG oslo_concurrency.lockutils [None req-cefc2d4a-b5f0-498f-bc4e-681e98e6a558 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] Releasing lock "refresh_cache-3a17d6a2-7bda-406b-a180-049f0e7adc78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 17:23:12 compute-0 nova_compute[185389]: 2026-01-26 17:23:12.634 185393 DEBUG nova.compute.manager [None req-cefc2d4a-b5f0-498f-bc4e-681e98e6a558 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144
Jan 26 17:23:12 compute-0 nova_compute[185389]: 2026-01-26 17:23:12.635 185393 DEBUG nova.compute.manager [None req-cefc2d4a-b5f0-498f-bc4e-681e98e6a558 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] network_info to inject: |[{"id": "244cc784-cc22-4baa-ae9b-a9648a2a11b8", "address": "fa:16:3e:95:38:b5", "network": {"id": "f2973d9a-cd90-4302-94cd-5d199c633af0", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1831496438-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63b4132d471f40c4bc46982b5adba0ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap244cc784-cc", "ovs_interfaceid": "244cc784-cc22-4baa-ae9b-a9648a2a11b8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145
Jan 26 17:23:12 compute-0 nova_compute[185389]: 2026-01-26 17:23:12.639 185393 DEBUG oslo_concurrency.lockutils [req-79f310df-b056-4ced-882e-1c3506642a70 req-fef32d58-bf96-41f2-a745-2fda7fce76a7 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquired lock "refresh_cache-3a17d6a2-7bda-406b-a180-049f0e7adc78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 17:23:12 compute-0 nova_compute[185389]: 2026-01-26 17:23:12.640 185393 DEBUG nova.network.neutron [req-79f310df-b056-4ced-882e-1c3506642a70 req-fef32d58-bf96-41f2-a745-2fda7fce76a7 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] Refreshing network info cache for port 244cc784-cc22-4baa-ae9b-a9648a2a11b8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 17:23:12 compute-0 nova_compute[185389]: 2026-01-26 17:23:12.677 185393 DEBUG oslo_concurrency.processutils [None req-bb3b79ea-4bca-4e43-812d-c5ae9a037f30 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/186e87cb-beb9-48df-8b10-dfc5c8afe996/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:23:12 compute-0 nova_compute[185389]: 2026-01-26 17:23:12.680 185393 DEBUG nova.objects.instance [None req-bb3b79ea-4bca-4e43-812d-c5ae9a037f30 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Lazy-loading 'trusted_certs' on Instance uuid 186e87cb-beb9-48df-8b10-dfc5c8afe996 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 17:23:12 compute-0 nova_compute[185389]: 2026-01-26 17:23:12.715 185393 DEBUG oslo_concurrency.processutils [None req-bb3b79ea-4bca-4e43-812d-c5ae9a037f30 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/fce4846b88d54584e094a4cd72b3ae3b642ea493 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:23:12 compute-0 nova_compute[185389]: 2026-01-26 17:23:12.786 185393 DEBUG oslo_concurrency.processutils [None req-bb3b79ea-4bca-4e43-812d-c5ae9a037f30 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/fce4846b88d54584e094a4cd72b3ae3b642ea493 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:23:12 compute-0 nova_compute[185389]: 2026-01-26 17:23:12.787 185393 DEBUG nova.virt.disk.api [None req-bb3b79ea-4bca-4e43-812d-c5ae9a037f30 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Checking if we can resize image /var/lib/nova/instances/186e87cb-beb9-48df-8b10-dfc5c8afe996/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 26 17:23:12 compute-0 nova_compute[185389]: 2026-01-26 17:23:12.788 185393 DEBUG oslo_concurrency.processutils [None req-bb3b79ea-4bca-4e43-812d-c5ae9a037f30 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/186e87cb-beb9-48df-8b10-dfc5c8afe996/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:23:12 compute-0 nova_compute[185389]: 2026-01-26 17:23:12.867 185393 DEBUG oslo_concurrency.processutils [None req-bb3b79ea-4bca-4e43-812d-c5ae9a037f30 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/186e87cb-beb9-48df-8b10-dfc5c8afe996/disk --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:23:12 compute-0 nova_compute[185389]: 2026-01-26 17:23:12.868 185393 DEBUG nova.virt.disk.api [None req-bb3b79ea-4bca-4e43-812d-c5ae9a037f30 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Cannot resize image /var/lib/nova/instances/186e87cb-beb9-48df-8b10-dfc5c8afe996/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 26 17:23:12 compute-0 nova_compute[185389]: 2026-01-26 17:23:12.869 185393 DEBUG nova.objects.instance [None req-bb3b79ea-4bca-4e43-812d-c5ae9a037f30 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Lazy-loading 'migration_context' on Instance uuid 186e87cb-beb9-48df-8b10-dfc5c8afe996 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 17:23:12 compute-0 nova_compute[185389]: 2026-01-26 17:23:12.890 185393 DEBUG nova.virt.libvirt.vif [None req-bb3b79ea-4bca-4e43-812d-c5ae9a037f30 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T17:21:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-34810632',display_name='tempest-ServerActionsTestJSON-server-34810632',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-34810632',id=7,image_ref='90acf026-cf3a-409a-999e-35d89bb9a6bf',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHTXJeN/GiNVdk5tCK494xdfwd2oGU0rMaOXTgIR00PDsryTQP8qZXOiVkgunB3Q/QnB+t1PHKegnTlGoORFTNpKcXfSp02clner5iC0LHdkku2AHdsO52WWVjg3zvN4Sw==',key_name='tempest-keypair-288037080',keypairs=<?>,launch_index=0,launched_at=2026-01-26T17:21:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='9b9ff6ad3012499db2eb0a82a1ccbcaa',ramdisk_id='',reservation_id='r-evpcozau',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='90acf026-cf3a-409a-999e-35d89bb9a6bf',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-254851137',owner_user_name='tempest-ServerActionsTestJSON-254851137-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T17:23:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='6acd3be55c754b3dbf8ef6c0922b18ae',uuid=186e87cb-beb9-48df-8b10-dfc5c8afe996,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3", "address": "fa:16:3e:b3:ea:64", "network": {"id": "4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1598418847-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b9ff6ad3012499db2eb0a82a1ccbcaa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e11a3e1-dc", "ovs_interfaceid": "6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 17:23:12 compute-0 nova_compute[185389]: 2026-01-26 17:23:12.891 185393 DEBUG nova.network.os_vif_util [None req-bb3b79ea-4bca-4e43-812d-c5ae9a037f30 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Converting VIF {"id": "6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3", "address": "fa:16:3e:b3:ea:64", "network": {"id": "4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1598418847-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b9ff6ad3012499db2eb0a82a1ccbcaa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e11a3e1-dc", "ovs_interfaceid": "6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 17:23:12 compute-0 nova_compute[185389]: 2026-01-26 17:23:12.892 185393 DEBUG nova.network.os_vif_util [None req-bb3b79ea-4bca-4e43-812d-c5ae9a037f30 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b3:ea:64,bridge_name='br-int',has_traffic_filtering=True,id=6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3,network=Network(4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e11a3e1-dc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 17:23:12 compute-0 nova_compute[185389]: 2026-01-26 17:23:12.892 185393 DEBUG os_vif [None req-bb3b79ea-4bca-4e43-812d-c5ae9a037f30 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b3:ea:64,bridge_name='br-int',has_traffic_filtering=True,id=6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3,network=Network(4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e11a3e1-dc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 17:23:12 compute-0 nova_compute[185389]: 2026-01-26 17:23:12.893 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:12 compute-0 nova_compute[185389]: 2026-01-26 17:23:12.894 185393 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:23:12 compute-0 nova_compute[185389]: 2026-01-26 17:23:12.895 185393 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 17:23:12 compute-0 nova_compute[185389]: 2026-01-26 17:23:12.898 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:12 compute-0 nova_compute[185389]: 2026-01-26 17:23:12.898 185393 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6e11a3e1-dc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:23:12 compute-0 nova_compute[185389]: 2026-01-26 17:23:12.899 185393 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6e11a3e1-dc, col_values=(('external_ids', {'iface-id': '6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b3:ea:64', 'vm-uuid': '186e87cb-beb9-48df-8b10-dfc5c8afe996'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:23:12 compute-0 NetworkManager[56253]: <info>  [1769448192.9034] manager: (tap6e11a3e1-dc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/51)
Jan 26 17:23:12 compute-0 nova_compute[185389]: 2026-01-26 17:23:12.904 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 17:23:12 compute-0 nova_compute[185389]: 2026-01-26 17:23:12.909 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:12 compute-0 nova_compute[185389]: 2026-01-26 17:23:12.911 185393 INFO os_vif [None req-bb3b79ea-4bca-4e43-812d-c5ae9a037f30 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b3:ea:64,bridge_name='br-int',has_traffic_filtering=True,id=6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3,network=Network(4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e11a3e1-dc')
Jan 26 17:23:13 compute-0 kernel: tap6e11a3e1-dc: entered promiscuous mode
Jan 26 17:23:13 compute-0 systemd-udevd[257277]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 17:23:13 compute-0 NetworkManager[56253]: <info>  [1769448193.0907] manager: (tap6e11a3e1-dc): new Tun device (/org/freedesktop/NetworkManager/Devices/52)
Jan 26 17:23:13 compute-0 nova_compute[185389]: 2026-01-26 17:23:13.093 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:13 compute-0 ovn_controller[97699]: 2026-01-26T17:23:13Z|00104|binding|INFO|Claiming lport 6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3 for this chassis.
Jan 26 17:23:13 compute-0 ovn_controller[97699]: 2026-01-26T17:23:13Z|00105|binding|INFO|6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3: Claiming fa:16:3e:b3:ea:64 10.100.0.5
Jan 26 17:23:13 compute-0 NetworkManager[56253]: <info>  [1769448193.1082] device (tap6e11a3e1-dc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 17:23:13 compute-0 ovn_controller[97699]: 2026-01-26T17:23:13Z|00106|binding|INFO|Setting lport 6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3 ovn-installed in OVS
Jan 26 17:23:13 compute-0 nova_compute[185389]: 2026-01-26 17:23:13.111 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:13 compute-0 NetworkManager[56253]: <info>  [1769448193.1121] device (tap6e11a3e1-dc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 17:23:13 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:13.114 106955 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b3:ea:64 10.100.0.5'], port_security=['fa:16:3e:b3:ea:64 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '186e87cb-beb9-48df-8b10-dfc5c8afe996', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9b9ff6ad3012499db2eb0a82a1ccbcaa', 'neutron:revision_number': '4', 'neutron:security_group_ids': '34094d50-e876-4bbe-985c-d748419fede6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.201'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a0b14c64-3c3f-4e5b-a736-e555c8460dfa, chassis=[<ovs.db.idl.Row object at 0x7faee18cfdf0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7faee18cfdf0>], logical_port=6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 17:23:13 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:13.115 106955 INFO neutron.agent.ovn.metadata.agent [-] Port 6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3 in datapath 4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac bound to our chassis
Jan 26 17:23:13 compute-0 ovn_controller[97699]: 2026-01-26T17:23:13Z|00107|binding|INFO|Setting lport 6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3 up in Southbound
Jan 26 17:23:13 compute-0 nova_compute[185389]: 2026-01-26 17:23:13.117 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:13 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:13.117 106955 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac
Jan 26 17:23:13 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:13.130 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[8692ba71-fea0-4354-a7ee-ee25763ef6bd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:23:13 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:13.132 106955 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4a7c91d4-b1 in ovnmeta-4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 17:23:13 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:13.133 238734 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4a7c91d4-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 17:23:13 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:13.134 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[8effdd99-8e75-430f-9c07-12fdea7f6f7e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:23:13 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:13.135 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[90a65fa0-f0af-48cb-932e-0f670186ca60]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:23:13 compute-0 systemd-machined[156679]: New machine qemu-11-instance-00000007.
Jan 26 17:23:13 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:13.147 107449 DEBUG oslo.privsep.daemon [-] privsep: reply[849bef4a-3f46-403c-bedb-896800097a29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:23:13 compute-0 systemd[1]: Started Virtual Machine qemu-11-instance-00000007.
Jan 26 17:23:13 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:13.167 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[f6938bfb-32dc-45d1-b25f-8f46ebf86ce7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:23:13 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:13.205 238787 DEBUG oslo.privsep.daemon [-] privsep: reply[ac359b75-934a-4e17-be74-0af2b158c414]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:23:13 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:13.212 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[acc52bea-5077-44d8-a6d8-fcf1ec6dfc71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:23:13 compute-0 NetworkManager[56253]: <info>  [1769448193.2130] manager: (tap4a7c91d4-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/53)
Jan 26 17:23:13 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:13.249 238787 DEBUG oslo.privsep.daemon [-] privsep: reply[f11adc67-1678-4efb-bb5f-9a52525d1bd6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:23:13 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:13.252 238787 DEBUG oslo.privsep.daemon [-] privsep: reply[84818e69-84ba-4d79-b14f-55776ed3ec93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:23:13 compute-0 NetworkManager[56253]: <info>  [1769448193.2765] device (tap4a7c91d4-b0): carrier: link connected
Jan 26 17:23:13 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:13.282 238787 DEBUG oslo.privsep.daemon [-] privsep: reply[ccd0c522-1d52-4eed-ac56-17a8cd57c238]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:23:13 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:13.309 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[698a5939-10cc-4d44-8004-d364456d21e0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4a7c91d4-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:67:1e:1e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 34], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 684177, 'reachable_time': 40464, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 257434, 'error': None, 'target': 'ovnmeta-4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:23:13 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:13.330 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[73324d19-03ff-4593-8eea-8705661735cd]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe67:1e1e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 684177, 'tstamp': 684177}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 257435, 'error': None, 'target': 'ovnmeta-4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:23:13 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:13.358 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[cef78d6a-4c28-4638-bedb-9774a269a7c6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4a7c91d4-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:67:1e:1e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 34], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 684177, 'reachable_time': 40464, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 257436, 'error': None, 'target': 'ovnmeta-4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:23:13 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:13.393 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[15efbaa5-fbff-45b4-b4bb-451527ac9635]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:23:13 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:13.469 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[3b226383-451e-4371-af47-2bfffa5f895a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:23:13 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:13.471 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4a7c91d4-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:23:13 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:13.472 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 17:23:13 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:13.472 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4a7c91d4-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:23:13 compute-0 nova_compute[185389]: 2026-01-26 17:23:13.475 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:13 compute-0 NetworkManager[56253]: <info>  [1769448193.4764] manager: (tap4a7c91d4-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/54)
Jan 26 17:23:13 compute-0 kernel: tap4a7c91d4-b0: entered promiscuous mode
Jan 26 17:23:13 compute-0 nova_compute[185389]: 2026-01-26 17:23:13.480 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:13 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:13.481 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4a7c91d4-b0, col_values=(('external_ids', {'iface-id': 'd58b7d53-5cc1-4ed8-aa06-162121fd1800'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:23:13 compute-0 nova_compute[185389]: 2026-01-26 17:23:13.482 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:13 compute-0 ovn_controller[97699]: 2026-01-26T17:23:13Z|00108|binding|INFO|Releasing lport d58b7d53-5cc1-4ed8-aa06-162121fd1800 from this chassis (sb_readonly=0)
Jan 26 17:23:13 compute-0 nova_compute[185389]: 2026-01-26 17:23:13.493 185393 DEBUG nova.virt.libvirt.host [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Removed pending event for 186e87cb-beb9-48df-8b10-dfc5c8afe996 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 26 17:23:13 compute-0 nova_compute[185389]: 2026-01-26 17:23:13.493 185393 DEBUG nova.virt.driver [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Emitting event <LifecycleEvent: 1769448193.4926348, 186e87cb-beb9-48df-8b10-dfc5c8afe996 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 17:23:13 compute-0 nova_compute[185389]: 2026-01-26 17:23:13.494 185393 INFO nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] VM Resumed (Lifecycle Event)
Jan 26 17:23:13 compute-0 nova_compute[185389]: 2026-01-26 17:23:13.496 185393 DEBUG nova.compute.manager [None req-bb3b79ea-4bca-4e43-812d-c5ae9a037f30 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 17:23:13 compute-0 nova_compute[185389]: 2026-01-26 17:23:13.502 185393 INFO nova.virt.libvirt.driver [-] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Instance rebooted successfully.
Jan 26 17:23:13 compute-0 nova_compute[185389]: 2026-01-26 17:23:13.503 185393 DEBUG nova.compute.manager [None req-bb3b79ea-4bca-4e43-812d-c5ae9a037f30 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 17:23:13 compute-0 nova_compute[185389]: 2026-01-26 17:23:13.504 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:13 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:13.506 106955 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 17:23:13 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:13.507 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[e8136bb7-fad2-4b83-8ec8-e58b65900ebc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:23:13 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:13.509 106955 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 17:23:13 compute-0 ovn_metadata_agent[106950]: global
Jan 26 17:23:13 compute-0 ovn_metadata_agent[106950]:     log         /dev/log local0 debug
Jan 26 17:23:13 compute-0 ovn_metadata_agent[106950]:     log-tag     haproxy-metadata-proxy-4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac
Jan 26 17:23:13 compute-0 ovn_metadata_agent[106950]:     user        root
Jan 26 17:23:13 compute-0 ovn_metadata_agent[106950]:     group       root
Jan 26 17:23:13 compute-0 ovn_metadata_agent[106950]:     maxconn     1024
Jan 26 17:23:13 compute-0 ovn_metadata_agent[106950]:     pidfile     /var/lib/neutron/external/pids/4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac.pid.haproxy
Jan 26 17:23:13 compute-0 ovn_metadata_agent[106950]:     daemon
Jan 26 17:23:13 compute-0 ovn_metadata_agent[106950]: 
Jan 26 17:23:13 compute-0 ovn_metadata_agent[106950]: defaults
Jan 26 17:23:13 compute-0 ovn_metadata_agent[106950]:     log global
Jan 26 17:23:13 compute-0 ovn_metadata_agent[106950]:     mode http
Jan 26 17:23:13 compute-0 ovn_metadata_agent[106950]:     option httplog
Jan 26 17:23:13 compute-0 ovn_metadata_agent[106950]:     option dontlognull
Jan 26 17:23:13 compute-0 ovn_metadata_agent[106950]:     option http-server-close
Jan 26 17:23:13 compute-0 ovn_metadata_agent[106950]:     option forwardfor
Jan 26 17:23:13 compute-0 ovn_metadata_agent[106950]:     retries                 3
Jan 26 17:23:13 compute-0 ovn_metadata_agent[106950]:     timeout http-request    30s
Jan 26 17:23:13 compute-0 ovn_metadata_agent[106950]:     timeout connect         30s
Jan 26 17:23:13 compute-0 ovn_metadata_agent[106950]:     timeout client          32s
Jan 26 17:23:13 compute-0 ovn_metadata_agent[106950]:     timeout server          32s
Jan 26 17:23:13 compute-0 ovn_metadata_agent[106950]:     timeout http-keep-alive 30s
Jan 26 17:23:13 compute-0 ovn_metadata_agent[106950]: 
Jan 26 17:23:13 compute-0 ovn_metadata_agent[106950]: 
Jan 26 17:23:13 compute-0 ovn_metadata_agent[106950]: listen listener
Jan 26 17:23:13 compute-0 ovn_metadata_agent[106950]:     bind 169.254.169.254:80
Jan 26 17:23:13 compute-0 ovn_metadata_agent[106950]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 17:23:13 compute-0 ovn_metadata_agent[106950]:     http-request add-header X-OVN-Network-ID 4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac
Jan 26 17:23:13 compute-0 ovn_metadata_agent[106950]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 17:23:13 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:13.510 106955 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac', 'env', 'PROCESS_TAG=haproxy-4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 17:23:13 compute-0 nova_compute[185389]: 2026-01-26 17:23:13.513 185393 DEBUG nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 17:23:13 compute-0 nova_compute[185389]: 2026-01-26 17:23:13.519 185393 DEBUG nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 17:23:13 compute-0 nova_compute[185389]: 2026-01-26 17:23:13.565 185393 INFO nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.
Jan 26 17:23:13 compute-0 nova_compute[185389]: 2026-01-26 17:23:13.566 185393 DEBUG nova.virt.driver [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Emitting event <LifecycleEvent: 1769448193.4971464, 186e87cb-beb9-48df-8b10-dfc5c8afe996 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 17:23:13 compute-0 nova_compute[185389]: 2026-01-26 17:23:13.566 185393 INFO nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] VM Started (Lifecycle Event)
Jan 26 17:23:13 compute-0 nova_compute[185389]: 2026-01-26 17:23:13.588 185393 DEBUG oslo_concurrency.lockutils [None req-bb3b79ea-4bca-4e43-812d-c5ae9a037f30 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Lock "186e87cb-beb9-48df-8b10-dfc5c8afe996" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 5.322s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:23:13 compute-0 nova_compute[185389]: 2026-01-26 17:23:13.593 185393 DEBUG nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 17:23:13 compute-0 nova_compute[185389]: 2026-01-26 17:23:13.599 185393 DEBUG nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 17:23:13 compute-0 nova_compute[185389]: 2026-01-26 17:23:13.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:23:13 compute-0 nova_compute[185389]: 2026-01-26 17:23:13.719 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 17:23:14 compute-0 podman[257474]: 2026-01-26 17:23:14.00736126 +0000 UTC m=+0.128259793 container create baf82945f886178e9ae609aebb5810f45389bb01bcc69c99bb922028af955875 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 17:23:14 compute-0 podman[257474]: 2026-01-26 17:23:13.920132712 +0000 UTC m=+0.041031295 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 17:23:14 compute-0 nova_compute[185389]: 2026-01-26 17:23:14.043 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:14 compute-0 systemd[1]: Started libpod-conmon-baf82945f886178e9ae609aebb5810f45389bb01bcc69c99bb922028af955875.scope.
Jan 26 17:23:14 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:23:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/514e8c7ba7a129be57292d6ade41d4de67ec50104c27738e43acebb517cb4b03/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 17:23:14 compute-0 podman[257474]: 2026-01-26 17:23:14.183798949 +0000 UTC m=+0.304697502 container init baf82945f886178e9ae609aebb5810f45389bb01bcc69c99bb922028af955875 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:23:14 compute-0 podman[257474]: 2026-01-26 17:23:14.202992381 +0000 UTC m=+0.323890934 container start baf82945f886178e9ae609aebb5810f45389bb01bcc69c99bb922028af955875 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 26 17:23:14 compute-0 neutron-haproxy-ovnmeta-4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac[257489]: [NOTICE]   (257493) : New worker (257495) forked
Jan 26 17:23:14 compute-0 neutron-haproxy-ovnmeta-4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac[257489]: [NOTICE]   (257493) : Loading success.
Jan 26 17:23:14 compute-0 nova_compute[185389]: 2026-01-26 17:23:14.555 185393 DEBUG oslo_concurrency.lockutils [None req-854f716a-a72f-4a27-b6d9-0f0bd5d9206a 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] Acquiring lock "3a17d6a2-7bda-406b-a180-049f0e7adc78" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:23:14 compute-0 nova_compute[185389]: 2026-01-26 17:23:14.556 185393 DEBUG oslo_concurrency.lockutils [None req-854f716a-a72f-4a27-b6d9-0f0bd5d9206a 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] Lock "3a17d6a2-7bda-406b-a180-049f0e7adc78" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:23:14 compute-0 nova_compute[185389]: 2026-01-26 17:23:14.556 185393 DEBUG oslo_concurrency.lockutils [None req-854f716a-a72f-4a27-b6d9-0f0bd5d9206a 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] Acquiring lock "3a17d6a2-7bda-406b-a180-049f0e7adc78-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:23:14 compute-0 nova_compute[185389]: 2026-01-26 17:23:14.556 185393 DEBUG oslo_concurrency.lockutils [None req-854f716a-a72f-4a27-b6d9-0f0bd5d9206a 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] Lock "3a17d6a2-7bda-406b-a180-049f0e7adc78-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:23:14 compute-0 nova_compute[185389]: 2026-01-26 17:23:14.556 185393 DEBUG oslo_concurrency.lockutils [None req-854f716a-a72f-4a27-b6d9-0f0bd5d9206a 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] Lock "3a17d6a2-7bda-406b-a180-049f0e7adc78-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:23:14 compute-0 nova_compute[185389]: 2026-01-26 17:23:14.558 185393 INFO nova.compute.manager [None req-854f716a-a72f-4a27-b6d9-0f0bd5d9206a 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] Terminating instance
Jan 26 17:23:14 compute-0 nova_compute[185389]: 2026-01-26 17:23:14.559 185393 DEBUG nova.compute.manager [None req-854f716a-a72f-4a27-b6d9-0f0bd5d9206a 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 17:23:14 compute-0 kernel: tap244cc784-cc (unregistering): left promiscuous mode
Jan 26 17:23:14 compute-0 NetworkManager[56253]: <info>  [1769448194.5978] device (tap244cc784-cc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 17:23:14 compute-0 nova_compute[185389]: 2026-01-26 17:23:14.610 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:14 compute-0 ovn_controller[97699]: 2026-01-26T17:23:14Z|00109|binding|INFO|Releasing lport 244cc784-cc22-4baa-ae9b-a9648a2a11b8 from this chassis (sb_readonly=0)
Jan 26 17:23:14 compute-0 ovn_controller[97699]: 2026-01-26T17:23:14Z|00110|binding|INFO|Setting lport 244cc784-cc22-4baa-ae9b-a9648a2a11b8 down in Southbound
Jan 26 17:23:14 compute-0 ovn_controller[97699]: 2026-01-26T17:23:14Z|00111|binding|INFO|Removing iface tap244cc784-cc ovn-installed in OVS
Jan 26 17:23:14 compute-0 nova_compute[185389]: 2026-01-26 17:23:14.624 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:14 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:14.626 106955 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:95:38:b5 10.100.0.3'], port_security=['fa:16:3e:95:38:b5 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '3a17d6a2-7bda-406b-a180-049f0e7adc78', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f2973d9a-cd90-4302-94cd-5d199c633af0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '63b4132d471f40c4bc46982b5adba0ec', 'neutron:revision_number': '6', 'neutron:security_group_ids': '29bb6900-aedc-4398-903a-a870631fd529', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.200'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3af21038-3c7d-4aaa-9df8-6451de57b700, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7faee18cfdf0>], logical_port=244cc784-cc22-4baa-ae9b-a9648a2a11b8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7faee18cfdf0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 17:23:14 compute-0 nova_compute[185389]: 2026-01-26 17:23:14.627 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:14 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:14.628 106955 INFO neutron.agent.ovn.metadata.agent [-] Port 244cc784-cc22-4baa-ae9b-a9648a2a11b8 in datapath f2973d9a-cd90-4302-94cd-5d199c633af0 unbound from our chassis
Jan 26 17:23:14 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:14.630 106955 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f2973d9a-cd90-4302-94cd-5d199c633af0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 17:23:14 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:14.631 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[963066e3-b28c-4bd5-a4f0-c55599812175]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:23:14 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:14.632 106955 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f2973d9a-cd90-4302-94cd-5d199c633af0 namespace which is not needed anymore
Jan 26 17:23:14 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Jan 26 17:23:14 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Consumed 41.705s CPU time.
Jan 26 17:23:14 compute-0 systemd-machined[156679]: Machine qemu-10-instance-0000000a terminated.
Jan 26 17:23:14 compute-0 nova_compute[185389]: 2026-01-26 17:23:14.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:23:14 compute-0 nova_compute[185389]: 2026-01-26 17:23:14.721 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 17:23:14 compute-0 nova_compute[185389]: 2026-01-26 17:23:14.787 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:14 compute-0 nova_compute[185389]: 2026-01-26 17:23:14.793 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:14 compute-0 nova_compute[185389]: 2026-01-26 17:23:14.808 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9907
Jan 26 17:23:14 compute-0 nova_compute[185389]: 2026-01-26 17:23:14.831 185393 INFO nova.virt.libvirt.driver [-] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] Instance destroyed successfully.
Jan 26 17:23:14 compute-0 nova_compute[185389]: 2026-01-26 17:23:14.832 185393 DEBUG nova.objects.instance [None req-854f716a-a72f-4a27-b6d9-0f0bd5d9206a 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] Lazy-loading 'resources' on Instance uuid 3a17d6a2-7bda-406b-a180-049f0e7adc78 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 17:23:14 compute-0 neutron-haproxy-ovnmeta-f2973d9a-cd90-4302-94cd-5d199c633af0[256694]: [NOTICE]   (256712) : haproxy version is 2.8.14-c23fe91
Jan 26 17:23:14 compute-0 neutron-haproxy-ovnmeta-f2973d9a-cd90-4302-94cd-5d199c633af0[256694]: [NOTICE]   (256712) : path to executable is /usr/sbin/haproxy
Jan 26 17:23:14 compute-0 neutron-haproxy-ovnmeta-f2973d9a-cd90-4302-94cd-5d199c633af0[256694]: [WARNING]  (256712) : Exiting Master process...
Jan 26 17:23:14 compute-0 neutron-haproxy-ovnmeta-f2973d9a-cd90-4302-94cd-5d199c633af0[256694]: [WARNING]  (256712) : Exiting Master process...
Jan 26 17:23:14 compute-0 nova_compute[185389]: 2026-01-26 17:23:14.840 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 17:23:14 compute-0 neutron-haproxy-ovnmeta-f2973d9a-cd90-4302-94cd-5d199c633af0[256694]: [ALERT]    (256712) : Current worker (256716) exited with code 143 (Terminated)
Jan 26 17:23:14 compute-0 neutron-haproxy-ovnmeta-f2973d9a-cd90-4302-94cd-5d199c633af0[256694]: [WARNING]  (256712) : All workers exited. Exiting... (0)
Jan 26 17:23:14 compute-0 nova_compute[185389]: 2026-01-26 17:23:14.846 185393 DEBUG nova.virt.libvirt.vif [None req-854f716a-a72f-4a27-b6d9-0f0bd5d9206a 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T17:21:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1027940370',display_name='tempest-AttachInterfacesUnderV243Test-server-1027940370',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1027940370',id=10,image_ref='90acf026-cf3a-409a-999e-35d89bb9a6bf',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHA4tlpIA5xPYWg25OrWZf25mgJJUQgHpl0o+5am0huMtCCdzeNB4+BNDx48EvTBsdSFA3wCFEGCW1Btwh4puP8AnxRuaEzCk2E9GsGP0ChphDhSWKC/2GFYoPfdzwRjhw==',key_name='tempest-keypair-818388180',keypairs=<?>,launch_index=0,launched_at=2026-01-26T17:22:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='63b4132d471f40c4bc46982b5adba0ec',ramdisk_id='',reservation_id='r-zpu0djkf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='90acf026-cf3a-409a-999e-35d89bb9a6bf',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesUnderV243Test-286995791',owner_user_name='tempest-AttachInterfacesUnderV243Test-286995791-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T17:23:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='0ac7a648f1b542b193f88ff9b120f211',uuid=3a17d6a2-7bda-406b-a180-049f0e7adc78,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "244cc784-cc22-4baa-ae9b-a9648a2a11b8", "address": "fa:16:3e:95:38:b5", "network": {"id": "f2973d9a-cd90-4302-94cd-5d199c633af0", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1831496438-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63b4132d471f40c4bc46982b5adba0ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap244cc784-cc", "ovs_interfaceid": "244cc784-cc22-4baa-ae9b-a9648a2a11b8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 17:23:14 compute-0 nova_compute[185389]: 2026-01-26 17:23:14.846 185393 DEBUG nova.network.os_vif_util [None req-854f716a-a72f-4a27-b6d9-0f0bd5d9206a 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] Converting VIF {"id": "244cc784-cc22-4baa-ae9b-a9648a2a11b8", "address": "fa:16:3e:95:38:b5", "network": {"id": "f2973d9a-cd90-4302-94cd-5d199c633af0", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1831496438-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63b4132d471f40c4bc46982b5adba0ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap244cc784-cc", "ovs_interfaceid": "244cc784-cc22-4baa-ae9b-a9648a2a11b8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 17:23:14 compute-0 systemd[1]: libpod-7e3dfac1ba700992e3453295463834da6afac3328b6cd2fe92bb6c762c35982a.scope: Deactivated successfully.
Jan 26 17:23:14 compute-0 nova_compute[185389]: 2026-01-26 17:23:14.847 185393 DEBUG nova.network.os_vif_util [None req-854f716a-a72f-4a27-b6d9-0f0bd5d9206a 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:95:38:b5,bridge_name='br-int',has_traffic_filtering=True,id=244cc784-cc22-4baa-ae9b-a9648a2a11b8,network=Network(f2973d9a-cd90-4302-94cd-5d199c633af0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap244cc784-cc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 17:23:14 compute-0 nova_compute[185389]: 2026-01-26 17:23:14.847 185393 DEBUG os_vif [None req-854f716a-a72f-4a27-b6d9-0f0bd5d9206a 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:95:38:b5,bridge_name='br-int',has_traffic_filtering=True,id=244cc784-cc22-4baa-ae9b-a9648a2a11b8,network=Network(f2973d9a-cd90-4302-94cd-5d199c633af0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap244cc784-cc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 17:23:14 compute-0 nova_compute[185389]: 2026-01-26 17:23:14.850 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:14 compute-0 nova_compute[185389]: 2026-01-26 17:23:14.850 185393 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap244cc784-cc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:23:14 compute-0 nova_compute[185389]: 2026-01-26 17:23:14.852 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:14 compute-0 podman[257523]: 2026-01-26 17:23:14.854166107 +0000 UTC m=+0.091934717 container died 7e3dfac1ba700992e3453295463834da6afac3328b6cd2fe92bb6c762c35982a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f2973d9a-cd90-4302-94cd-5d199c633af0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:23:14 compute-0 nova_compute[185389]: 2026-01-26 17:23:14.855 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:14 compute-0 nova_compute[185389]: 2026-01-26 17:23:14.859 185393 INFO os_vif [None req-854f716a-a72f-4a27-b6d9-0f0bd5d9206a 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:95:38:b5,bridge_name='br-int',has_traffic_filtering=True,id=244cc784-cc22-4baa-ae9b-a9648a2a11b8,network=Network(f2973d9a-cd90-4302-94cd-5d199c633af0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap244cc784-cc')
Jan 26 17:23:14 compute-0 nova_compute[185389]: 2026-01-26 17:23:14.862 185393 INFO nova.virt.libvirt.driver [None req-854f716a-a72f-4a27-b6d9-0f0bd5d9206a 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] Deleting instance files /var/lib/nova/instances/3a17d6a2-7bda-406b-a180-049f0e7adc78_del
Jan 26 17:23:14 compute-0 nova_compute[185389]: 2026-01-26 17:23:14.863 185393 INFO nova.virt.libvirt.driver [None req-854f716a-a72f-4a27-b6d9-0f0bd5d9206a 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] Deletion of /var/lib/nova/instances/3a17d6a2-7bda-406b-a180-049f0e7adc78_del complete
Jan 26 17:23:14 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7e3dfac1ba700992e3453295463834da6afac3328b6cd2fe92bb6c762c35982a-userdata-shm.mount: Deactivated successfully.
Jan 26 17:23:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-4672e57531aadbbcebb7ff295f2d92d3d0d91ab2c3242d3e80414c8571b0b6ee-merged.mount: Deactivated successfully.
Jan 26 17:23:14 compute-0 podman[257523]: 2026-01-26 17:23:14.908214324 +0000 UTC m=+0.145982944 container cleanup 7e3dfac1ba700992e3453295463834da6afac3328b6cd2fe92bb6c762c35982a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f2973d9a-cd90-4302-94cd-5d199c633af0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 26 17:23:14 compute-0 systemd[1]: libpod-conmon-7e3dfac1ba700992e3453295463834da6afac3328b6cd2fe92bb6c762c35982a.scope: Deactivated successfully.
Jan 26 17:23:14 compute-0 nova_compute[185389]: 2026-01-26 17:23:14.927 185393 INFO nova.compute.manager [None req-854f716a-a72f-4a27-b6d9-0f0bd5d9206a 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] Took 0.37 seconds to destroy the instance on the hypervisor.
Jan 26 17:23:14 compute-0 nova_compute[185389]: 2026-01-26 17:23:14.927 185393 DEBUG oslo.service.loopingcall [None req-854f716a-a72f-4a27-b6d9-0f0bd5d9206a 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 17:23:14 compute-0 nova_compute[185389]: 2026-01-26 17:23:14.928 185393 DEBUG nova.compute.manager [-] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 17:23:14 compute-0 nova_compute[185389]: 2026-01-26 17:23:14.928 185393 DEBUG nova.network.neutron [-] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 17:23:14 compute-0 podman[257568]: 2026-01-26 17:23:14.988660738 +0000 UTC m=+0.051226642 container remove 7e3dfac1ba700992e3453295463834da6afac3328b6cd2fe92bb6c762c35982a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f2973d9a-cd90-4302-94cd-5d199c633af0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 26 17:23:14 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:14.996 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[de389a7f-6246-452b-bd1b-9eb58b04dcab]: (4, ('Mon Jan 26 05:23:14 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-f2973d9a-cd90-4302-94cd-5d199c633af0 (7e3dfac1ba700992e3453295463834da6afac3328b6cd2fe92bb6c762c35982a)\n7e3dfac1ba700992e3453295463834da6afac3328b6cd2fe92bb6c762c35982a\nMon Jan 26 05:23:14 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-f2973d9a-cd90-4302-94cd-5d199c633af0 (7e3dfac1ba700992e3453295463834da6afac3328b6cd2fe92bb6c762c35982a)\n7e3dfac1ba700992e3453295463834da6afac3328b6cd2fe92bb6c762c35982a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:23:14 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:14.999 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[2e037f53-2471-4e40-a505-cdb4034c6ddf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:23:15 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:15.000 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf2973d9a-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:23:15 compute-0 nova_compute[185389]: 2026-01-26 17:23:15.003 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:15 compute-0 kernel: tapf2973d9a-c0: left promiscuous mode
Jan 26 17:23:15 compute-0 nova_compute[185389]: 2026-01-26 17:23:15.020 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:15 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:15.021 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[1e5654e8-0209-4caf-8173-44d0108f54f0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:23:15 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:15.039 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[f5471d13-d51b-4b9b-bb1c-cc2543238dc6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:23:15 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:15.041 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[6dde6379-c2d2-492a-824c-d0899319aa44]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:23:15 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:15.058 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[eb251291-4501-4638-a006-863e5884bd4f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 677732, 'reachable_time': 32613, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 257581, 'error': None, 'target': 'ovnmeta-f2973d9a-cd90-4302-94cd-5d199c633af0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:23:15 compute-0 systemd[1]: run-netns-ovnmeta\x2df2973d9a\x2dcd90\x2d4302\x2d94cd\x2d5d199c633af0.mount: Deactivated successfully.
Jan 26 17:23:15 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:15.064 107449 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f2973d9a-cd90-4302-94cd-5d199c633af0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 17:23:15 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:15.064 107449 DEBUG oslo.privsep.daemon [-] privsep: reply[a73b870e-7066-450b-a2e7-468962d77e13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:23:15 compute-0 nova_compute[185389]: 2026-01-26 17:23:15.893 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:16 compute-0 podman[257584]: 2026-01-26 17:23:16.197986436 +0000 UTC m=+0.079663814 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 26 17:23:16 compute-0 podman[257585]: 2026-01-26 17:23:16.210558937 +0000 UTC m=+0.090933369 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, vendor=Red Hat, Inc., version=9.4, config_id=kepler, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., release-0.7.12=, io.openshift.expose-services=, name=ubi9, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9)
Jan 26 17:23:16 compute-0 nova_compute[185389]: 2026-01-26 17:23:16.216 185393 DEBUG nova.network.neutron [req-79f310df-b056-4ced-882e-1c3506642a70 req-fef32d58-bf96-41f2-a745-2fda7fce76a7 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] Updated VIF entry in instance network info cache for port 244cc784-cc22-4baa-ae9b-a9648a2a11b8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 17:23:16 compute-0 nova_compute[185389]: 2026-01-26 17:23:16.217 185393 DEBUG nova.network.neutron [req-79f310df-b056-4ced-882e-1c3506642a70 req-fef32d58-bf96-41f2-a745-2fda7fce76a7 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] Updating instance_info_cache with network_info: [{"id": "244cc784-cc22-4baa-ae9b-a9648a2a11b8", "address": "fa:16:3e:95:38:b5", "network": {"id": "f2973d9a-cd90-4302-94cd-5d199c633af0", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1831496438-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63b4132d471f40c4bc46982b5adba0ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap244cc784-cc", "ovs_interfaceid": "244cc784-cc22-4baa-ae9b-a9648a2a11b8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 17:23:16 compute-0 nova_compute[185389]: 2026-01-26 17:23:16.233 185393 DEBUG oslo_concurrency.lockutils [req-79f310df-b056-4ced-882e-1c3506642a70 req-fef32d58-bf96-41f2-a745-2fda7fce76a7 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Releasing lock "refresh_cache-3a17d6a2-7bda-406b-a180-049f0e7adc78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 17:23:16 compute-0 podman[257583]: 2026-01-26 17:23:16.271528902 +0000 UTC m=+0.148053830 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, 
org.label-schema.build-date=20251202, config_id=ovn_controller)
Jan 26 17:23:16 compute-0 nova_compute[185389]: 2026-01-26 17:23:16.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:23:17 compute-0 nova_compute[185389]: 2026-01-26 17:23:17.206 185393 DEBUG oslo_concurrency.lockutils [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Acquiring lock "cf6218c0-bc2c-4097-91df-f60657ef7ab1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:23:17 compute-0 nova_compute[185389]: 2026-01-26 17:23:17.207 185393 DEBUG oslo_concurrency.lockutils [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Lock "cf6218c0-bc2c-4097-91df-f60657ef7ab1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:23:17 compute-0 nova_compute[185389]: 2026-01-26 17:23:17.243 185393 DEBUG nova.compute.manager [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 17:23:17 compute-0 nova_compute[185389]: 2026-01-26 17:23:17.501 185393 DEBUG nova.compute.manager [req-ba962477-df72-4d6c-88a2-56daf6a89706 req-6af213e3-d917-4de2-9645-28624b90c816 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Received event network-vif-unplugged-6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 17:23:17 compute-0 nova_compute[185389]: 2026-01-26 17:23:17.501 185393 DEBUG oslo_concurrency.lockutils [req-ba962477-df72-4d6c-88a2-56daf6a89706 req-6af213e3-d917-4de2-9645-28624b90c816 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquiring lock "186e87cb-beb9-48df-8b10-dfc5c8afe996-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:23:17 compute-0 nova_compute[185389]: 2026-01-26 17:23:17.502 185393 DEBUG oslo_concurrency.lockutils [req-ba962477-df72-4d6c-88a2-56daf6a89706 req-6af213e3-d917-4de2-9645-28624b90c816 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "186e87cb-beb9-48df-8b10-dfc5c8afe996-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:23:17 compute-0 nova_compute[185389]: 2026-01-26 17:23:17.502 185393 DEBUG oslo_concurrency.lockutils [req-ba962477-df72-4d6c-88a2-56daf6a89706 req-6af213e3-d917-4de2-9645-28624b90c816 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "186e87cb-beb9-48df-8b10-dfc5c8afe996-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:23:17 compute-0 nova_compute[185389]: 2026-01-26 17:23:17.502 185393 DEBUG nova.compute.manager [req-ba962477-df72-4d6c-88a2-56daf6a89706 req-6af213e3-d917-4de2-9645-28624b90c816 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] No waiting events found dispatching network-vif-unplugged-6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 17:23:17 compute-0 nova_compute[185389]: 2026-01-26 17:23:17.503 185393 WARNING nova.compute.manager [req-ba962477-df72-4d6c-88a2-56daf6a89706 req-6af213e3-d917-4de2-9645-28624b90c816 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Received unexpected event network-vif-unplugged-6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3 for instance with vm_state active and task_state None.
Jan 26 17:23:17 compute-0 nova_compute[185389]: 2026-01-26 17:23:17.530 185393 DEBUG oslo_concurrency.lockutils [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:23:17 compute-0 nova_compute[185389]: 2026-01-26 17:23:17.530 185393 DEBUG oslo_concurrency.lockutils [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:23:17 compute-0 nova_compute[185389]: 2026-01-26 17:23:17.542 185393 DEBUG nova.virt.hardware [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 17:23:17 compute-0 nova_compute[185389]: 2026-01-26 17:23:17.543 185393 INFO nova.compute.claims [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] Claim successful on node compute-0.ctlplane.example.com
Jan 26 17:23:17 compute-0 nova_compute[185389]: 2026-01-26 17:23:17.585 185393 DEBUG nova.network.neutron [-] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 17:23:17 compute-0 nova_compute[185389]: 2026-01-26 17:23:17.623 185393 INFO nova.compute.manager [-] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] Took 2.70 seconds to deallocate network for instance.
Jan 26 17:23:17 compute-0 nova_compute[185389]: 2026-01-26 17:23:17.675 185393 DEBUG oslo_concurrency.lockutils [None req-854f716a-a72f-4a27-b6d9-0f0bd5d9206a 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:23:17 compute-0 nova_compute[185389]: 2026-01-26 17:23:17.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:23:17 compute-0 nova_compute[185389]: 2026-01-26 17:23:17.907 185393 DEBUG nova.scheduler.client.report [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Refreshing inventories for resource provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 26 17:23:18 compute-0 nova_compute[185389]: 2026-01-26 17:23:18.062 185393 DEBUG nova.scheduler.client.report [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Updating ProviderTree inventory for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 26 17:23:18 compute-0 nova_compute[185389]: 2026-01-26 17:23:18.062 185393 DEBUG nova.compute.provider_tree [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Updating inventory in ProviderTree for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 26 17:23:18 compute-0 nova_compute[185389]: 2026-01-26 17:23:18.070 185393 DEBUG nova.compute.manager [req-e1f61190-bd9b-4c53-89c8-d07fd0eeea51 req-64dfc767-f819-49d2-a98d-5b6a6def1f5f 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] Received event network-vif-deleted-244cc784-cc22-4baa-ae9b-a9648a2a11b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 17:23:18 compute-0 nova_compute[185389]: 2026-01-26 17:23:18.088 185393 DEBUG nova.scheduler.client.report [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Refreshing aggregate associations for resource provider b0bb5d31-f35b-4a04-b67d-66acc24fb822, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 26 17:23:18 compute-0 nova_compute[185389]: 2026-01-26 17:23:18.113 185393 DEBUG nova.scheduler.client.report [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Refreshing trait associations for resource provider b0bb5d31-f35b-4a04-b67d-66acc24fb822, traits: COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SHA,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_ABM,HW_CPU_X86_AMD_SVM,HW_CPU_X86_F16C,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE42,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_BMI,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_MMX,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 26 17:23:18 compute-0 nova_compute[185389]: 2026-01-26 17:23:18.203 185393 DEBUG nova.compute.provider_tree [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:23:18 compute-0 nova_compute[185389]: 2026-01-26 17:23:18.267 185393 DEBUG nova.scheduler.client.report [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:23:18 compute-0 nova_compute[185389]: 2026-01-26 17:23:18.307 185393 DEBUG oslo_concurrency.lockutils [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.776s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:23:18 compute-0 nova_compute[185389]: 2026-01-26 17:23:18.308 185393 DEBUG nova.compute.manager [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 17:23:18 compute-0 nova_compute[185389]: 2026-01-26 17:23:18.312 185393 DEBUG oslo_concurrency.lockutils [None req-854f716a-a72f-4a27-b6d9-0f0bd5d9206a 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.638s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:23:18 compute-0 nova_compute[185389]: 2026-01-26 17:23:18.372 185393 DEBUG nova.compute.manager [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 17:23:18 compute-0 nova_compute[185389]: 2026-01-26 17:23:18.373 185393 DEBUG nova.network.neutron [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 17:23:18 compute-0 nova_compute[185389]: 2026-01-26 17:23:18.400 185393 INFO nova.virt.libvirt.driver [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 17:23:18 compute-0 nova_compute[185389]: 2026-01-26 17:23:18.416 185393 DEBUG nova.compute.provider_tree [None req-854f716a-a72f-4a27-b6d9-0f0bd5d9206a 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:23:18 compute-0 nova_compute[185389]: 2026-01-26 17:23:18.419 185393 DEBUG nova.compute.manager [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 17:23:18 compute-0 nova_compute[185389]: 2026-01-26 17:23:18.440 185393 DEBUG nova.scheduler.client.report [None req-854f716a-a72f-4a27-b6d9-0f0bd5d9206a 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:23:18 compute-0 nova_compute[185389]: 2026-01-26 17:23:18.677 185393 DEBUG oslo_concurrency.lockutils [None req-854f716a-a72f-4a27-b6d9-0f0bd5d9206a 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.365s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:23:18 compute-0 nova_compute[185389]: 2026-01-26 17:23:18.684 185393 DEBUG nova.compute.manager [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 17:23:18 compute-0 nova_compute[185389]: 2026-01-26 17:23:18.686 185393 DEBUG nova.virt.libvirt.driver [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 17:23:18 compute-0 nova_compute[185389]: 2026-01-26 17:23:18.686 185393 INFO nova.virt.libvirt.driver [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] Creating image(s)
Jan 26 17:23:18 compute-0 nova_compute[185389]: 2026-01-26 17:23:18.687 185393 DEBUG oslo_concurrency.lockutils [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Acquiring lock "/var/lib/nova/instances/cf6218c0-bc2c-4097-91df-f60657ef7ab1/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:23:18 compute-0 nova_compute[185389]: 2026-01-26 17:23:18.687 185393 DEBUG oslo_concurrency.lockutils [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Lock "/var/lib/nova/instances/cf6218c0-bc2c-4097-91df-f60657ef7ab1/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:23:18 compute-0 nova_compute[185389]: 2026-01-26 17:23:18.688 185393 DEBUG oslo_concurrency.lockutils [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Lock "/var/lib/nova/instances/cf6218c0-bc2c-4097-91df-f60657ef7ab1/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:23:18 compute-0 nova_compute[185389]: 2026-01-26 17:23:18.702 185393 DEBUG oslo_concurrency.processutils [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/fce4846b88d54584e094a4cd72b3ae3b642ea493 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:23:18 compute-0 nova_compute[185389]: 2026-01-26 17:23:18.722 185393 INFO nova.scheduler.client.report [None req-854f716a-a72f-4a27-b6d9-0f0bd5d9206a 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] Deleted allocations for instance 3a17d6a2-7bda-406b-a180-049f0e7adc78
Jan 26 17:23:18 compute-0 nova_compute[185389]: 2026-01-26 17:23:18.725 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:23:18 compute-0 nova_compute[185389]: 2026-01-26 17:23:18.778 185393 DEBUG oslo_concurrency.processutils [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/fce4846b88d54584e094a4cd72b3ae3b642ea493 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:23:18 compute-0 nova_compute[185389]: 2026-01-26 17:23:18.780 185393 DEBUG oslo_concurrency.lockutils [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Acquiring lock "fce4846b88d54584e094a4cd72b3ae3b642ea493" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:23:18 compute-0 nova_compute[185389]: 2026-01-26 17:23:18.780 185393 DEBUG oslo_concurrency.lockutils [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Lock "fce4846b88d54584e094a4cd72b3ae3b642ea493" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:23:18 compute-0 nova_compute[185389]: 2026-01-26 17:23:18.793 185393 DEBUG oslo_concurrency.processutils [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/fce4846b88d54584e094a4cd72b3ae3b642ea493 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:23:18 compute-0 nova_compute[185389]: 2026-01-26 17:23:18.814 185393 DEBUG oslo_concurrency.lockutils [None req-854f716a-a72f-4a27-b6d9-0f0bd5d9206a 0ac7a648f1b542b193f88ff9b120f211 63b4132d471f40c4bc46982b5adba0ec - - default default] Lock "3a17d6a2-7bda-406b-a180-049f0e7adc78" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.259s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:23:18 compute-0 nova_compute[185389]: 2026-01-26 17:23:18.868 185393 DEBUG oslo_concurrency.processutils [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/fce4846b88d54584e094a4cd72b3ae3b642ea493 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:23:18 compute-0 nova_compute[185389]: 2026-01-26 17:23:18.868 185393 DEBUG oslo_concurrency.processutils [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/fce4846b88d54584e094a4cd72b3ae3b642ea493,backing_fmt=raw /var/lib/nova/instances/cf6218c0-bc2c-4097-91df-f60657ef7ab1/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:23:18 compute-0 nova_compute[185389]: 2026-01-26 17:23:18.925 185393 DEBUG oslo_concurrency.processutils [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/fce4846b88d54584e094a4cd72b3ae3b642ea493,backing_fmt=raw /var/lib/nova/instances/cf6218c0-bc2c-4097-91df-f60657ef7ab1/disk 1073741824" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:23:18 compute-0 nova_compute[185389]: 2026-01-26 17:23:18.927 185393 DEBUG oslo_concurrency.lockutils [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Lock "fce4846b88d54584e094a4cd72b3ae3b642ea493" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.146s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:23:18 compute-0 nova_compute[185389]: 2026-01-26 17:23:18.927 185393 DEBUG oslo_concurrency.processutils [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/fce4846b88d54584e094a4cd72b3ae3b642ea493 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:23:18 compute-0 nova_compute[185389]: 2026-01-26 17:23:18.991 185393 DEBUG oslo_concurrency.processutils [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/fce4846b88d54584e094a4cd72b3ae3b642ea493 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:23:18 compute-0 nova_compute[185389]: 2026-01-26 17:23:18.992 185393 DEBUG nova.virt.disk.api [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Checking if we can resize image /var/lib/nova/instances/cf6218c0-bc2c-4097-91df-f60657ef7ab1/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 26 17:23:18 compute-0 nova_compute[185389]: 2026-01-26 17:23:18.992 185393 DEBUG oslo_concurrency.processutils [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cf6218c0-bc2c-4097-91df-f60657ef7ab1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:23:19 compute-0 nova_compute[185389]: 2026-01-26 17:23:19.068 185393 DEBUG oslo_concurrency.processutils [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cf6218c0-bc2c-4097-91df-f60657ef7ab1/disk --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:23:19 compute-0 nova_compute[185389]: 2026-01-26 17:23:19.070 185393 DEBUG nova.virt.disk.api [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Cannot resize image /var/lib/nova/instances/cf6218c0-bc2c-4097-91df-f60657ef7ab1/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 26 17:23:19 compute-0 nova_compute[185389]: 2026-01-26 17:23:19.070 185393 DEBUG nova.objects.instance [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Lazy-loading 'migration_context' on Instance uuid cf6218c0-bc2c-4097-91df-f60657ef7ab1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 17:23:19 compute-0 nova_compute[185389]: 2026-01-26 17:23:19.139 185393 DEBUG nova.virt.libvirt.driver [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 17:23:19 compute-0 nova_compute[185389]: 2026-01-26 17:23:19.139 185393 DEBUG nova.virt.libvirt.driver [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] Ensure instance console log exists: /var/lib/nova/instances/cf6218c0-bc2c-4097-91df-f60657ef7ab1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 17:23:19 compute-0 nova_compute[185389]: 2026-01-26 17:23:19.140 185393 DEBUG oslo_concurrency.lockutils [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:23:19 compute-0 nova_compute[185389]: 2026-01-26 17:23:19.141 185393 DEBUG oslo_concurrency.lockutils [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:23:19 compute-0 nova_compute[185389]: 2026-01-26 17:23:19.141 185393 DEBUG oslo_concurrency.lockutils [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:23:19 compute-0 nova_compute[185389]: 2026-01-26 17:23:19.298 185393 DEBUG nova.policy [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a04a28d3bd7648abb04b59df0aeee0aa', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '72e07b00ccf54deaa85258e2c3332b45', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 17:23:19 compute-0 nova_compute[185389]: 2026-01-26 17:23:19.624 185393 DEBUG nova.compute.manager [req-1215baf2-6dc5-4cde-9d83-fc72d9ad82e8 req-fd4cdbd5-b091-4f28-90eb-0220c1f99eb3 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Received event network-vif-plugged-6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 17:23:19 compute-0 nova_compute[185389]: 2026-01-26 17:23:19.625 185393 DEBUG oslo_concurrency.lockutils [req-1215baf2-6dc5-4cde-9d83-fc72d9ad82e8 req-fd4cdbd5-b091-4f28-90eb-0220c1f99eb3 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquiring lock "186e87cb-beb9-48df-8b10-dfc5c8afe996-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:23:19 compute-0 nova_compute[185389]: 2026-01-26 17:23:19.625 185393 DEBUG oslo_concurrency.lockutils [req-1215baf2-6dc5-4cde-9d83-fc72d9ad82e8 req-fd4cdbd5-b091-4f28-90eb-0220c1f99eb3 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "186e87cb-beb9-48df-8b10-dfc5c8afe996-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:23:19 compute-0 nova_compute[185389]: 2026-01-26 17:23:19.626 185393 DEBUG oslo_concurrency.lockutils [req-1215baf2-6dc5-4cde-9d83-fc72d9ad82e8 req-fd4cdbd5-b091-4f28-90eb-0220c1f99eb3 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "186e87cb-beb9-48df-8b10-dfc5c8afe996-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:23:19 compute-0 nova_compute[185389]: 2026-01-26 17:23:19.626 185393 DEBUG nova.compute.manager [req-1215baf2-6dc5-4cde-9d83-fc72d9ad82e8 req-fd4cdbd5-b091-4f28-90eb-0220c1f99eb3 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] No waiting events found dispatching network-vif-plugged-6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 17:23:19 compute-0 nova_compute[185389]: 2026-01-26 17:23:19.627 185393 WARNING nova.compute.manager [req-1215baf2-6dc5-4cde-9d83-fc72d9ad82e8 req-fd4cdbd5-b091-4f28-90eb-0220c1f99eb3 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Received unexpected event network-vif-plugged-6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3 for instance with vm_state active and task_state None.
Jan 26 17:23:19 compute-0 nova_compute[185389]: 2026-01-26 17:23:19.627 185393 DEBUG nova.compute.manager [req-1215baf2-6dc5-4cde-9d83-fc72d9ad82e8 req-fd4cdbd5-b091-4f28-90eb-0220c1f99eb3 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Received event network-vif-plugged-6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 17:23:19 compute-0 nova_compute[185389]: 2026-01-26 17:23:19.627 185393 DEBUG oslo_concurrency.lockutils [req-1215baf2-6dc5-4cde-9d83-fc72d9ad82e8 req-fd4cdbd5-b091-4f28-90eb-0220c1f99eb3 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquiring lock "186e87cb-beb9-48df-8b10-dfc5c8afe996-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:23:19 compute-0 nova_compute[185389]: 2026-01-26 17:23:19.628 185393 DEBUG oslo_concurrency.lockutils [req-1215baf2-6dc5-4cde-9d83-fc72d9ad82e8 req-fd4cdbd5-b091-4f28-90eb-0220c1f99eb3 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "186e87cb-beb9-48df-8b10-dfc5c8afe996-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:23:19 compute-0 nova_compute[185389]: 2026-01-26 17:23:19.628 185393 DEBUG oslo_concurrency.lockutils [req-1215baf2-6dc5-4cde-9d83-fc72d9ad82e8 req-fd4cdbd5-b091-4f28-90eb-0220c1f99eb3 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "186e87cb-beb9-48df-8b10-dfc5c8afe996-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:23:19 compute-0 nova_compute[185389]: 2026-01-26 17:23:19.628 185393 DEBUG nova.compute.manager [req-1215baf2-6dc5-4cde-9d83-fc72d9ad82e8 req-fd4cdbd5-b091-4f28-90eb-0220c1f99eb3 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] No waiting events found dispatching network-vif-plugged-6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 17:23:19 compute-0 nova_compute[185389]: 2026-01-26 17:23:19.629 185393 WARNING nova.compute.manager [req-1215baf2-6dc5-4cde-9d83-fc72d9ad82e8 req-fd4cdbd5-b091-4f28-90eb-0220c1f99eb3 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Received unexpected event network-vif-plugged-6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3 for instance with vm_state active and task_state None.
Jan 26 17:23:19 compute-0 nova_compute[185389]: 2026-01-26 17:23:19.629 185393 DEBUG nova.compute.manager [req-1215baf2-6dc5-4cde-9d83-fc72d9ad82e8 req-fd4cdbd5-b091-4f28-90eb-0220c1f99eb3 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Received event network-vif-plugged-6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 17:23:19 compute-0 nova_compute[185389]: 2026-01-26 17:23:19.629 185393 DEBUG oslo_concurrency.lockutils [req-1215baf2-6dc5-4cde-9d83-fc72d9ad82e8 req-fd4cdbd5-b091-4f28-90eb-0220c1f99eb3 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquiring lock "186e87cb-beb9-48df-8b10-dfc5c8afe996-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:23:19 compute-0 nova_compute[185389]: 2026-01-26 17:23:19.630 185393 DEBUG oslo_concurrency.lockutils [req-1215baf2-6dc5-4cde-9d83-fc72d9ad82e8 req-fd4cdbd5-b091-4f28-90eb-0220c1f99eb3 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "186e87cb-beb9-48df-8b10-dfc5c8afe996-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:23:19 compute-0 nova_compute[185389]: 2026-01-26 17:23:19.630 185393 DEBUG oslo_concurrency.lockutils [req-1215baf2-6dc5-4cde-9d83-fc72d9ad82e8 req-fd4cdbd5-b091-4f28-90eb-0220c1f99eb3 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "186e87cb-beb9-48df-8b10-dfc5c8afe996-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:23:19 compute-0 nova_compute[185389]: 2026-01-26 17:23:19.630 185393 DEBUG nova.compute.manager [req-1215baf2-6dc5-4cde-9d83-fc72d9ad82e8 req-fd4cdbd5-b091-4f28-90eb-0220c1f99eb3 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] No waiting events found dispatching network-vif-plugged-6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 17:23:19 compute-0 nova_compute[185389]: 2026-01-26 17:23:19.631 185393 WARNING nova.compute.manager [req-1215baf2-6dc5-4cde-9d83-fc72d9ad82e8 req-fd4cdbd5-b091-4f28-90eb-0220c1f99eb3 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Received unexpected event network-vif-plugged-6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3 for instance with vm_state active and task_state None.
Jan 26 17:23:19 compute-0 nova_compute[185389]: 2026-01-26 17:23:19.856 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:20 compute-0 nova_compute[185389]: 2026-01-26 17:23:20.894 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:21 compute-0 nova_compute[185389]: 2026-01-26 17:23:21.085 185393 DEBUG nova.network.neutron [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] Successfully created port: 994f4b51-014f-469e-9096-4ffe2dafa019 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 17:23:22 compute-0 nova_compute[185389]: 2026-01-26 17:23:22.377 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:22 compute-0 nova_compute[185389]: 2026-01-26 17:23:22.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:23:22 compute-0 nova_compute[185389]: 2026-01-26 17:23:22.752 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:23:22 compute-0 nova_compute[185389]: 2026-01-26 17:23:22.753 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:23:22 compute-0 nova_compute[185389]: 2026-01-26 17:23:22.753 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:23:22 compute-0 nova_compute[185389]: 2026-01-26 17:23:22.754 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 17:23:22 compute-0 nova_compute[185389]: 2026-01-26 17:23:22.845 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/186e87cb-beb9-48df-8b10-dfc5c8afe996/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:23:22 compute-0 nova_compute[185389]: 2026-01-26 17:23:22.914 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/186e87cb-beb9-48df-8b10-dfc5c8afe996/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:23:22 compute-0 nova_compute[185389]: 2026-01-26 17:23:22.915 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/186e87cb-beb9-48df-8b10-dfc5c8afe996/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:23:22 compute-0 nova_compute[185389]: 2026-01-26 17:23:22.983 185393 DEBUG nova.network.neutron [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] Successfully updated port: 994f4b51-014f-469e-9096-4ffe2dafa019 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 17:23:22 compute-0 nova_compute[185389]: 2026-01-26 17:23:22.987 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/186e87cb-beb9-48df-8b10-dfc5c8afe996/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:23:23 compute-0 nova_compute[185389]: 2026-01-26 17:23:23.186 185393 DEBUG oslo_concurrency.lockutils [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Acquiring lock "refresh_cache-cf6218c0-bc2c-4097-91df-f60657ef7ab1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 17:23:23 compute-0 nova_compute[185389]: 2026-01-26 17:23:23.187 185393 DEBUG oslo_concurrency.lockutils [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Acquired lock "refresh_cache-cf6218c0-bc2c-4097-91df-f60657ef7ab1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 17:23:23 compute-0 nova_compute[185389]: 2026-01-26 17:23:23.187 185393 DEBUG nova.network.neutron [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 17:23:23 compute-0 nova_compute[185389]: 2026-01-26 17:23:23.356 185393 DEBUG nova.compute.manager [req-69dc9534-ae83-4ce9-b5f1-bd746f59e10d req-32d05beb-3fa6-4e85-9c89-a78b99791f8a 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] Received event network-changed-994f4b51-014f-469e-9096-4ffe2dafa019 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 17:23:23 compute-0 nova_compute[185389]: 2026-01-26 17:23:23.357 185393 DEBUG nova.compute.manager [req-69dc9534-ae83-4ce9-b5f1-bd746f59e10d req-32d05beb-3fa6-4e85-9c89-a78b99791f8a 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] Refreshing instance network info cache due to event network-changed-994f4b51-014f-469e-9096-4ffe2dafa019. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 17:23:23 compute-0 nova_compute[185389]: 2026-01-26 17:23:23.358 185393 DEBUG oslo_concurrency.lockutils [req-69dc9534-ae83-4ce9-b5f1-bd746f59e10d req-32d05beb-3fa6-4e85-9c89-a78b99791f8a 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquiring lock "refresh_cache-cf6218c0-bc2c-4097-91df-f60657ef7ab1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 17:23:23 compute-0 nova_compute[185389]: 2026-01-26 17:23:23.383 185393 WARNING nova.virt.libvirt.driver [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:23:23 compute-0 nova_compute[185389]: 2026-01-26 17:23:23.385 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5171MB free_disk=72.34943008422852GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 17:23:23 compute-0 nova_compute[185389]: 2026-01-26 17:23:23.386 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:23:23 compute-0 nova_compute[185389]: 2026-01-26 17:23:23.386 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:23:23 compute-0 nova_compute[185389]: 2026-01-26 17:23:23.707 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance 186e87cb-beb9-48df-8b10-dfc5c8afe996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 17:23:23 compute-0 nova_compute[185389]: 2026-01-26 17:23:23.708 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance cf6218c0-bc2c-4097-91df-f60657ef7ab1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 17:23:23 compute-0 nova_compute[185389]: 2026-01-26 17:23:23.715 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 17:23:23 compute-0 nova_compute[185389]: 2026-01-26 17:23:23.716 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 17:23:23 compute-0 nova_compute[185389]: 2026-01-26 17:23:23.751 185393 DEBUG nova.network.neutron [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 17:23:23 compute-0 nova_compute[185389]: 2026-01-26 17:23:23.799 185393 DEBUG nova.compute.provider_tree [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:23:23 compute-0 nova_compute[185389]: 2026-01-26 17:23:23.833 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:23:23 compute-0 nova_compute[185389]: 2026-01-26 17:23:23.870 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 17:23:23 compute-0 nova_compute[185389]: 2026-01-26 17:23:23.870 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.484s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:23:24 compute-0 nova_compute[185389]: 2026-01-26 17:23:24.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:23:24 compute-0 nova_compute[185389]: 2026-01-26 17:23:24.720 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 26 17:23:24 compute-0 nova_compute[185389]: 2026-01-26 17:23:24.753 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 26 17:23:24 compute-0 nova_compute[185389]: 2026-01-26 17:23:24.861 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:25 compute-0 nova_compute[185389]: 2026-01-26 17:23:25.895 185393 DEBUG nova.network.neutron [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] Updating instance_info_cache with network_info: [{"id": "994f4b51-014f-469e-9096-4ffe2dafa019", "address": "fa:16:3e:d9:71:2d", "network": {"id": "181e9ee7-4b3f-4c71-9f87-ee525fae0a23", "bridge": "br-int", "label": "tempest-network-smoke--2054182957", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "72e07b00ccf54deaa85258e2c3332b45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap994f4b51-01", "ovs_interfaceid": "994f4b51-014f-469e-9096-4ffe2dafa019", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 17:23:25 compute-0 nova_compute[185389]: 2026-01-26 17:23:25.897 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:25 compute-0 nova_compute[185389]: 2026-01-26 17:23:25.933 185393 DEBUG oslo_concurrency.lockutils [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Releasing lock "refresh_cache-cf6218c0-bc2c-4097-91df-f60657ef7ab1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 17:23:25 compute-0 nova_compute[185389]: 2026-01-26 17:23:25.933 185393 DEBUG nova.compute.manager [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] Instance network_info: |[{"id": "994f4b51-014f-469e-9096-4ffe2dafa019", "address": "fa:16:3e:d9:71:2d", "network": {"id": "181e9ee7-4b3f-4c71-9f87-ee525fae0a23", "bridge": "br-int", "label": "tempest-network-smoke--2054182957", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "72e07b00ccf54deaa85258e2c3332b45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap994f4b51-01", "ovs_interfaceid": "994f4b51-014f-469e-9096-4ffe2dafa019", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 17:23:25 compute-0 nova_compute[185389]: 2026-01-26 17:23:25.933 185393 DEBUG oslo_concurrency.lockutils [req-69dc9534-ae83-4ce9-b5f1-bd746f59e10d req-32d05beb-3fa6-4e85-9c89-a78b99791f8a 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquired lock "refresh_cache-cf6218c0-bc2c-4097-91df-f60657ef7ab1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 17:23:25 compute-0 nova_compute[185389]: 2026-01-26 17:23:25.934 185393 DEBUG nova.network.neutron [req-69dc9534-ae83-4ce9-b5f1-bd746f59e10d req-32d05beb-3fa6-4e85-9c89-a78b99791f8a 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] Refreshing network info cache for port 994f4b51-014f-469e-9096-4ffe2dafa019 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 17:23:25 compute-0 nova_compute[185389]: 2026-01-26 17:23:25.936 185393 DEBUG nova.virt.libvirt.driver [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] Start _get_guest_xml network_info=[{"id": "994f4b51-014f-469e-9096-4ffe2dafa019", "address": "fa:16:3e:d9:71:2d", "network": {"id": "181e9ee7-4b3f-4c71-9f87-ee525fae0a23", "bridge": "br-int", "label": "tempest-network-smoke--2054182957", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "72e07b00ccf54deaa85258e2c3332b45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap994f4b51-01", "ovs_interfaceid": "994f4b51-014f-469e-9096-4ffe2dafa019", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T17:20:37Z,direct_url=<?>,disk_format='qcow2',id=90acf026-cf3a-409a-999e-35d89bb9a6bf,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='aa8f1f3bbce34237a208c8e92ca9286f',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T17:20:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'boot_index': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '90acf026-cf3a-409a-999e-35d89bb9a6bf'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 17:23:25 compute-0 nova_compute[185389]: 2026-01-26 17:23:25.943 185393 WARNING nova.virt.libvirt.driver [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:23:25 compute-0 nova_compute[185389]: 2026-01-26 17:23:25.948 185393 DEBUG nova.virt.libvirt.host [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 17:23:25 compute-0 nova_compute[185389]: 2026-01-26 17:23:25.949 185393 DEBUG nova.virt.libvirt.host [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 17:23:25 compute-0 nova_compute[185389]: 2026-01-26 17:23:25.952 185393 DEBUG nova.virt.libvirt.host [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 17:23:25 compute-0 nova_compute[185389]: 2026-01-26 17:23:25.953 185393 DEBUG nova.virt.libvirt.host [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 17:23:25 compute-0 nova_compute[185389]: 2026-01-26 17:23:25.953 185393 DEBUG nova.virt.libvirt.driver [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 17:23:25 compute-0 nova_compute[185389]: 2026-01-26 17:23:25.953 185393 DEBUG nova.virt.hardware [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T17:20:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8d013773-e8ea-4b83-a8e3-f58d9749637f',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T17:20:37Z,direct_url=<?>,disk_format='qcow2',id=90acf026-cf3a-409a-999e-35d89bb9a6bf,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='aa8f1f3bbce34237a208c8e92ca9286f',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T17:20:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 17:23:25 compute-0 nova_compute[185389]: 2026-01-26 17:23:25.954 185393 DEBUG nova.virt.hardware [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 17:23:25 compute-0 nova_compute[185389]: 2026-01-26 17:23:25.954 185393 DEBUG nova.virt.hardware [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 17:23:25 compute-0 nova_compute[185389]: 2026-01-26 17:23:25.954 185393 DEBUG nova.virt.hardware [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 17:23:25 compute-0 nova_compute[185389]: 2026-01-26 17:23:25.954 185393 DEBUG nova.virt.hardware [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 17:23:25 compute-0 nova_compute[185389]: 2026-01-26 17:23:25.954 185393 DEBUG nova.virt.hardware [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 17:23:25 compute-0 nova_compute[185389]: 2026-01-26 17:23:25.955 185393 DEBUG nova.virt.hardware [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 17:23:25 compute-0 nova_compute[185389]: 2026-01-26 17:23:25.955 185393 DEBUG nova.virt.hardware [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 17:23:25 compute-0 nova_compute[185389]: 2026-01-26 17:23:25.955 185393 DEBUG nova.virt.hardware [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 17:23:25 compute-0 nova_compute[185389]: 2026-01-26 17:23:25.955 185393 DEBUG nova.virt.hardware [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 17:23:25 compute-0 nova_compute[185389]: 2026-01-26 17:23:25.955 185393 DEBUG nova.virt.hardware [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 17:23:25 compute-0 nova_compute[185389]: 2026-01-26 17:23:25.958 185393 DEBUG nova.virt.libvirt.vif [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T17:23:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-979678882',display_name='tempest-TestNetworkBasicOps-server-979678882',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-979678882',id=11,image_ref='90acf026-cf3a-409a-999e-35d89bb9a6bf',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP+jha5o/aZq5uZdccmmJmVbVXMmdJ9yvermTWC6rreNImtyIBQbEkIIBt+QllF3Pluku08MzARjYDJ2ncgmid88GHIWnOSOFYqddg/+d8y/J6sZxMXgV9oLcscbo2PVKg==',key_name='tempest-TestNetworkBasicOps-1082130080',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='72e07b00ccf54deaa85258e2c3332b45',ramdisk_id='',reservation_id='r-kk5vnpdr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='90acf026-cf3a-409a-999e-35d89bb9a6bf',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-420464940',owner_user_name='tempest-TestNetworkBasicOps-420464940-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T17:23:18Z,user_data=None,user_id='a04a28d3bd7648abb04b59df0aeee0aa',uuid=cf6218c0-bc2c-4097-91df-f60657ef7ab1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "994f4b51-014f-469e-9096-4ffe2dafa019", "address": "fa:16:3e:d9:71:2d", "network": {"id": "181e9ee7-4b3f-4c71-9f87-ee525fae0a23", "bridge": "br-int", "label": "tempest-network-smoke--2054182957", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "72e07b00ccf54deaa85258e2c3332b45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap994f4b51-01", "ovs_interfaceid": "994f4b51-014f-469e-9096-4ffe2dafa019", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 17:23:25 compute-0 nova_compute[185389]: 2026-01-26 17:23:25.959 185393 DEBUG nova.network.os_vif_util [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Converting VIF {"id": "994f4b51-014f-469e-9096-4ffe2dafa019", "address": "fa:16:3e:d9:71:2d", "network": {"id": "181e9ee7-4b3f-4c71-9f87-ee525fae0a23", "bridge": "br-int", "label": "tempest-network-smoke--2054182957", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "72e07b00ccf54deaa85258e2c3332b45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap994f4b51-01", "ovs_interfaceid": "994f4b51-014f-469e-9096-4ffe2dafa019", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 17:23:25 compute-0 nova_compute[185389]: 2026-01-26 17:23:25.959 185393 DEBUG nova.network.os_vif_util [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d9:71:2d,bridge_name='br-int',has_traffic_filtering=True,id=994f4b51-014f-469e-9096-4ffe2dafa019,network=Network(181e9ee7-4b3f-4c71-9f87-ee525fae0a23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap994f4b51-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 17:23:25 compute-0 nova_compute[185389]: 2026-01-26 17:23:25.960 185393 DEBUG nova.objects.instance [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Lazy-loading 'pci_devices' on Instance uuid cf6218c0-bc2c-4097-91df-f60657ef7ab1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 17:23:25 compute-0 nova_compute[185389]: 2026-01-26 17:23:25.996 185393 DEBUG nova.virt.libvirt.driver [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] End _get_guest_xml xml=<domain type="kvm">
Jan 26 17:23:25 compute-0 nova_compute[185389]:   <uuid>cf6218c0-bc2c-4097-91df-f60657ef7ab1</uuid>
Jan 26 17:23:25 compute-0 nova_compute[185389]:   <name>instance-0000000b</name>
Jan 26 17:23:25 compute-0 nova_compute[185389]:   <memory>131072</memory>
Jan 26 17:23:25 compute-0 nova_compute[185389]:   <vcpu>1</vcpu>
Jan 26 17:23:25 compute-0 nova_compute[185389]:   <metadata>
Jan 26 17:23:25 compute-0 nova_compute[185389]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 17:23:25 compute-0 nova_compute[185389]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 17:23:25 compute-0 nova_compute[185389]:       <nova:name>tempest-TestNetworkBasicOps-server-979678882</nova:name>
Jan 26 17:23:25 compute-0 nova_compute[185389]:       <nova:creationTime>2026-01-26 17:23:25</nova:creationTime>
Jan 26 17:23:25 compute-0 nova_compute[185389]:       <nova:flavor name="m1.nano">
Jan 26 17:23:25 compute-0 nova_compute[185389]:         <nova:memory>128</nova:memory>
Jan 26 17:23:25 compute-0 nova_compute[185389]:         <nova:disk>1</nova:disk>
Jan 26 17:23:25 compute-0 nova_compute[185389]:         <nova:swap>0</nova:swap>
Jan 26 17:23:25 compute-0 nova_compute[185389]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 17:23:25 compute-0 nova_compute[185389]:         <nova:vcpus>1</nova:vcpus>
Jan 26 17:23:25 compute-0 nova_compute[185389]:       </nova:flavor>
Jan 26 17:23:25 compute-0 nova_compute[185389]:       <nova:owner>
Jan 26 17:23:25 compute-0 nova_compute[185389]:         <nova:user uuid="a04a28d3bd7648abb04b59df0aeee0aa">tempest-TestNetworkBasicOps-420464940-project-member</nova:user>
Jan 26 17:23:25 compute-0 nova_compute[185389]:         <nova:project uuid="72e07b00ccf54deaa85258e2c3332b45">tempest-TestNetworkBasicOps-420464940</nova:project>
Jan 26 17:23:25 compute-0 nova_compute[185389]:       </nova:owner>
Jan 26 17:23:25 compute-0 nova_compute[185389]:       <nova:root type="image" uuid="90acf026-cf3a-409a-999e-35d89bb9a6bf"/>
Jan 26 17:23:25 compute-0 nova_compute[185389]:       <nova:ports>
Jan 26 17:23:25 compute-0 nova_compute[185389]:         <nova:port uuid="994f4b51-014f-469e-9096-4ffe2dafa019">
Jan 26 17:23:25 compute-0 nova_compute[185389]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 26 17:23:25 compute-0 nova_compute[185389]:         </nova:port>
Jan 26 17:23:25 compute-0 nova_compute[185389]:       </nova:ports>
Jan 26 17:23:25 compute-0 nova_compute[185389]:     </nova:instance>
Jan 26 17:23:25 compute-0 nova_compute[185389]:   </metadata>
Jan 26 17:23:25 compute-0 nova_compute[185389]:   <sysinfo type="smbios">
Jan 26 17:23:25 compute-0 nova_compute[185389]:     <system>
Jan 26 17:23:25 compute-0 nova_compute[185389]:       <entry name="manufacturer">RDO</entry>
Jan 26 17:23:25 compute-0 nova_compute[185389]:       <entry name="product">OpenStack Compute</entry>
Jan 26 17:23:25 compute-0 nova_compute[185389]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 17:23:25 compute-0 nova_compute[185389]:       <entry name="serial">cf6218c0-bc2c-4097-91df-f60657ef7ab1</entry>
Jan 26 17:23:25 compute-0 nova_compute[185389]:       <entry name="uuid">cf6218c0-bc2c-4097-91df-f60657ef7ab1</entry>
Jan 26 17:23:25 compute-0 nova_compute[185389]:       <entry name="family">Virtual Machine</entry>
Jan 26 17:23:25 compute-0 nova_compute[185389]:     </system>
Jan 26 17:23:25 compute-0 nova_compute[185389]:   </sysinfo>
Jan 26 17:23:25 compute-0 nova_compute[185389]:   <os>
Jan 26 17:23:25 compute-0 nova_compute[185389]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 17:23:25 compute-0 nova_compute[185389]:     <boot dev="hd"/>
Jan 26 17:23:25 compute-0 nova_compute[185389]:     <smbios mode="sysinfo"/>
Jan 26 17:23:25 compute-0 nova_compute[185389]:   </os>
Jan 26 17:23:25 compute-0 nova_compute[185389]:   <features>
Jan 26 17:23:25 compute-0 nova_compute[185389]:     <acpi/>
Jan 26 17:23:25 compute-0 nova_compute[185389]:     <apic/>
Jan 26 17:23:25 compute-0 nova_compute[185389]:     <vmcoreinfo/>
Jan 26 17:23:25 compute-0 nova_compute[185389]:   </features>
Jan 26 17:23:25 compute-0 nova_compute[185389]:   <clock offset="utc">
Jan 26 17:23:25 compute-0 nova_compute[185389]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 17:23:25 compute-0 nova_compute[185389]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 17:23:25 compute-0 nova_compute[185389]:     <timer name="hpet" present="no"/>
Jan 26 17:23:25 compute-0 nova_compute[185389]:   </clock>
Jan 26 17:23:25 compute-0 nova_compute[185389]:   <cpu mode="host-model" match="exact">
Jan 26 17:23:25 compute-0 nova_compute[185389]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 17:23:25 compute-0 nova_compute[185389]:   </cpu>
Jan 26 17:23:25 compute-0 nova_compute[185389]:   <devices>
Jan 26 17:23:25 compute-0 nova_compute[185389]:     <disk type="file" device="disk">
Jan 26 17:23:25 compute-0 nova_compute[185389]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 26 17:23:25 compute-0 nova_compute[185389]:       <source file="/var/lib/nova/instances/cf6218c0-bc2c-4097-91df-f60657ef7ab1/disk"/>
Jan 26 17:23:25 compute-0 nova_compute[185389]:       <target dev="vda" bus="virtio"/>
Jan 26 17:23:25 compute-0 nova_compute[185389]:     </disk>
Jan 26 17:23:25 compute-0 nova_compute[185389]:     <disk type="file" device="cdrom">
Jan 26 17:23:25 compute-0 nova_compute[185389]:       <driver name="qemu" type="raw" cache="none"/>
Jan 26 17:23:25 compute-0 nova_compute[185389]:       <source file="/var/lib/nova/instances/cf6218c0-bc2c-4097-91df-f60657ef7ab1/disk.config"/>
Jan 26 17:23:25 compute-0 nova_compute[185389]:       <target dev="sda" bus="sata"/>
Jan 26 17:23:25 compute-0 nova_compute[185389]:     </disk>
Jan 26 17:23:25 compute-0 nova_compute[185389]:     <interface type="ethernet">
Jan 26 17:23:25 compute-0 nova_compute[185389]:       <mac address="fa:16:3e:d9:71:2d"/>
Jan 26 17:23:25 compute-0 nova_compute[185389]:       <model type="virtio"/>
Jan 26 17:23:25 compute-0 nova_compute[185389]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 17:23:25 compute-0 nova_compute[185389]:       <mtu size="1442"/>
Jan 26 17:23:25 compute-0 nova_compute[185389]:       <target dev="tap994f4b51-01"/>
Jan 26 17:23:25 compute-0 nova_compute[185389]:     </interface>
Jan 26 17:23:25 compute-0 nova_compute[185389]:     <serial type="pty">
Jan 26 17:23:25 compute-0 nova_compute[185389]:       <log file="/var/lib/nova/instances/cf6218c0-bc2c-4097-91df-f60657ef7ab1/console.log" append="off"/>
Jan 26 17:23:25 compute-0 nova_compute[185389]:     </serial>
Jan 26 17:23:25 compute-0 nova_compute[185389]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 17:23:25 compute-0 nova_compute[185389]:     <video>
Jan 26 17:23:25 compute-0 nova_compute[185389]:       <model type="virtio"/>
Jan 26 17:23:25 compute-0 nova_compute[185389]:     </video>
Jan 26 17:23:25 compute-0 nova_compute[185389]:     <input type="tablet" bus="usb"/>
Jan 26 17:23:25 compute-0 nova_compute[185389]:     <rng model="virtio">
Jan 26 17:23:25 compute-0 nova_compute[185389]:       <backend model="random">/dev/urandom</backend>
Jan 26 17:23:25 compute-0 nova_compute[185389]:     </rng>
Jan 26 17:23:25 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root"/>
Jan 26 17:23:25 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:23:25 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:23:25 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:23:25 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:23:25 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:23:25 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:23:25 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:23:25 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:23:25 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:23:25 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:23:25 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:23:25 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:23:25 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:23:25 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:23:25 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:23:25 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:23:25 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:23:25 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:23:25 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:23:25 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:23:25 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:23:25 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:23:25 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:23:25 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:23:25 compute-0 nova_compute[185389]:     <controller type="usb" index="0"/>
Jan 26 17:23:25 compute-0 nova_compute[185389]:     <memballoon model="virtio">
Jan 26 17:23:25 compute-0 nova_compute[185389]:       <stats period="10"/>
Jan 26 17:23:25 compute-0 nova_compute[185389]:     </memballoon>
Jan 26 17:23:25 compute-0 nova_compute[185389]:   </devices>
Jan 26 17:23:25 compute-0 nova_compute[185389]: </domain>
Jan 26 17:23:25 compute-0 nova_compute[185389]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 17:23:25 compute-0 nova_compute[185389]: 2026-01-26 17:23:25.996 185393 DEBUG nova.compute.manager [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] Preparing to wait for external event network-vif-plugged-994f4b51-014f-469e-9096-4ffe2dafa019 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 17:23:25 compute-0 nova_compute[185389]: 2026-01-26 17:23:25.996 185393 DEBUG oslo_concurrency.lockutils [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Acquiring lock "cf6218c0-bc2c-4097-91df-f60657ef7ab1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:23:25 compute-0 nova_compute[185389]: 2026-01-26 17:23:25.997 185393 DEBUG oslo_concurrency.lockutils [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Lock "cf6218c0-bc2c-4097-91df-f60657ef7ab1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:23:25 compute-0 nova_compute[185389]: 2026-01-26 17:23:25.997 185393 DEBUG oslo_concurrency.lockutils [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Lock "cf6218c0-bc2c-4097-91df-f60657ef7ab1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:23:26 compute-0 nova_compute[185389]: 2026-01-26 17:23:25.998 185393 DEBUG nova.virt.libvirt.vif [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T17:23:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-979678882',display_name='tempest-TestNetworkBasicOps-server-979678882',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-979678882',id=11,image_ref='90acf026-cf3a-409a-999e-35d89bb9a6bf',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP+jha5o/aZq5uZdccmmJmVbVXMmdJ9yvermTWC6rreNImtyIBQbEkIIBt+QllF3Pluku08MzARjYDJ2ncgmid88GHIWnOSOFYqddg/+d8y/J6sZxMXgV9oLcscbo2PVKg==',key_name='tempest-TestNetworkBasicOps-1082130080',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='72e07b00ccf54deaa85258e2c3332b45',ramdisk_id='',reservation_id='r-kk5vnpdr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='90acf026-cf3a-409a-999e-35d89bb9a6bf',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-420464940',owner_user_name='tempest-TestNetworkBasicOps-420464940-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T17:23:18Z,user_data=None,user_id='a04a28d3bd7648abb04b59df0aeee0aa',uuid=cf6218c0-bc2c-4097-91df-f60657ef7ab1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "994f4b51-014f-469e-9096-4ffe2dafa019", "address": "fa:16:3e:d9:71:2d", "network": {"id": "181e9ee7-4b3f-4c71-9f87-ee525fae0a23", "bridge": "br-int", "label": "tempest-network-smoke--2054182957", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "72e07b00ccf54deaa85258e2c3332b45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap994f4b51-01", "ovs_interfaceid": "994f4b51-014f-469e-9096-4ffe2dafa019", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 17:23:26 compute-0 nova_compute[185389]: 2026-01-26 17:23:25.998 185393 DEBUG nova.network.os_vif_util [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Converting VIF {"id": "994f4b51-014f-469e-9096-4ffe2dafa019", "address": "fa:16:3e:d9:71:2d", "network": {"id": "181e9ee7-4b3f-4c71-9f87-ee525fae0a23", "bridge": "br-int", "label": "tempest-network-smoke--2054182957", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "72e07b00ccf54deaa85258e2c3332b45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap994f4b51-01", "ovs_interfaceid": "994f4b51-014f-469e-9096-4ffe2dafa019", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 17:23:26 compute-0 nova_compute[185389]: 2026-01-26 17:23:25.999 185393 DEBUG nova.network.os_vif_util [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d9:71:2d,bridge_name='br-int',has_traffic_filtering=True,id=994f4b51-014f-469e-9096-4ffe2dafa019,network=Network(181e9ee7-4b3f-4c71-9f87-ee525fae0a23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap994f4b51-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 17:23:26 compute-0 nova_compute[185389]: 2026-01-26 17:23:26.000 185393 DEBUG os_vif [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:71:2d,bridge_name='br-int',has_traffic_filtering=True,id=994f4b51-014f-469e-9096-4ffe2dafa019,network=Network(181e9ee7-4b3f-4c71-9f87-ee525fae0a23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap994f4b51-01') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 17:23:26 compute-0 nova_compute[185389]: 2026-01-26 17:23:26.001 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:26 compute-0 nova_compute[185389]: 2026-01-26 17:23:26.001 185393 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:23:26 compute-0 nova_compute[185389]: 2026-01-26 17:23:26.002 185393 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 17:23:26 compute-0 nova_compute[185389]: 2026-01-26 17:23:26.011 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:26 compute-0 nova_compute[185389]: 2026-01-26 17:23:26.012 185393 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap994f4b51-01, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:23:26 compute-0 nova_compute[185389]: 2026-01-26 17:23:26.012 185393 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap994f4b51-01, col_values=(('external_ids', {'iface-id': '994f4b51-014f-469e-9096-4ffe2dafa019', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d9:71:2d', 'vm-uuid': 'cf6218c0-bc2c-4097-91df-f60657ef7ab1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:23:26 compute-0 nova_compute[185389]: 2026-01-26 17:23:26.014 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:26 compute-0 NetworkManager[56253]: <info>  [1769448206.0154] manager: (tap994f4b51-01): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/55)
Jan 26 17:23:26 compute-0 nova_compute[185389]: 2026-01-26 17:23:26.016 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 17:23:26 compute-0 nova_compute[185389]: 2026-01-26 17:23:26.024 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:26 compute-0 nova_compute[185389]: 2026-01-26 17:23:26.024 185393 INFO os_vif [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:71:2d,bridge_name='br-int',has_traffic_filtering=True,id=994f4b51-014f-469e-9096-4ffe2dafa019,network=Network(181e9ee7-4b3f-4c71-9f87-ee525fae0a23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap994f4b51-01')
Jan 26 17:23:26 compute-0 nova_compute[185389]: 2026-01-26 17:23:26.268 185393 DEBUG oslo_concurrency.lockutils [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] Acquiring lock "69a46725-8a69-43b6-a3bc-615971d6f0df" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:23:26 compute-0 nova_compute[185389]: 2026-01-26 17:23:26.269 185393 DEBUG oslo_concurrency.lockutils [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] Lock "69a46725-8a69-43b6-a3bc-615971d6f0df" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:23:26 compute-0 nova_compute[185389]: 2026-01-26 17:23:26.353 185393 DEBUG nova.compute.manager [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] [instance: 69a46725-8a69-43b6-a3bc-615971d6f0df] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 17:23:26 compute-0 nova_compute[185389]: 2026-01-26 17:23:26.396 185393 DEBUG nova.virt.libvirt.driver [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 17:23:26 compute-0 nova_compute[185389]: 2026-01-26 17:23:26.396 185393 DEBUG nova.virt.libvirt.driver [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 17:23:26 compute-0 nova_compute[185389]: 2026-01-26 17:23:26.397 185393 DEBUG nova.virt.libvirt.driver [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] No VIF found with MAC fa:16:3e:d9:71:2d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 17:23:26 compute-0 nova_compute[185389]: 2026-01-26 17:23:26.397 185393 INFO nova.virt.libvirt.driver [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] Using config drive
Jan 26 17:23:26 compute-0 nova_compute[185389]: 2026-01-26 17:23:26.472 185393 DEBUG oslo_concurrency.lockutils [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:23:26 compute-0 nova_compute[185389]: 2026-01-26 17:23:26.473 185393 DEBUG oslo_concurrency.lockutils [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:23:26 compute-0 nova_compute[185389]: 2026-01-26 17:23:26.482 185393 DEBUG nova.virt.hardware [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 17:23:26 compute-0 nova_compute[185389]: 2026-01-26 17:23:26.482 185393 INFO nova.compute.claims [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] [instance: 69a46725-8a69-43b6-a3bc-615971d6f0df] Claim successful on node compute-0.ctlplane.example.com
Jan 26 17:23:26 compute-0 nova_compute[185389]: 2026-01-26 17:23:26.717 185393 DEBUG nova.compute.provider_tree [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:23:26 compute-0 nova_compute[185389]: 2026-01-26 17:23:26.733 185393 DEBUG nova.scheduler.client.report [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:23:26 compute-0 nova_compute[185389]: 2026-01-26 17:23:26.770 185393 DEBUG oslo_concurrency.lockutils [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.297s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:23:26 compute-0 nova_compute[185389]: 2026-01-26 17:23:26.771 185393 DEBUG nova.compute.manager [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] [instance: 69a46725-8a69-43b6-a3bc-615971d6f0df] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 17:23:26 compute-0 nova_compute[185389]: 2026-01-26 17:23:26.833 185393 DEBUG nova.compute.manager [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] [instance: 69a46725-8a69-43b6-a3bc-615971d6f0df] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 17:23:26 compute-0 nova_compute[185389]: 2026-01-26 17:23:26.834 185393 DEBUG nova.network.neutron [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] [instance: 69a46725-8a69-43b6-a3bc-615971d6f0df] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 17:23:26 compute-0 nova_compute[185389]: 2026-01-26 17:23:26.856 185393 INFO nova.virt.libvirt.driver [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] [instance: 69a46725-8a69-43b6-a3bc-615971d6f0df] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 17:23:26 compute-0 nova_compute[185389]: 2026-01-26 17:23:26.882 185393 DEBUG nova.compute.manager [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] [instance: 69a46725-8a69-43b6-a3bc-615971d6f0df] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 17:23:27 compute-0 nova_compute[185389]: 2026-01-26 17:23:27.018 185393 DEBUG nova.compute.manager [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] [instance: 69a46725-8a69-43b6-a3bc-615971d6f0df] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 17:23:27 compute-0 nova_compute[185389]: 2026-01-26 17:23:27.019 185393 DEBUG nova.virt.libvirt.driver [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] [instance: 69a46725-8a69-43b6-a3bc-615971d6f0df] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 17:23:27 compute-0 nova_compute[185389]: 2026-01-26 17:23:27.019 185393 INFO nova.virt.libvirt.driver [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] [instance: 69a46725-8a69-43b6-a3bc-615971d6f0df] Creating image(s)
Jan 26 17:23:27 compute-0 nova_compute[185389]: 2026-01-26 17:23:27.020 185393 DEBUG oslo_concurrency.lockutils [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] Acquiring lock "/var/lib/nova/instances/69a46725-8a69-43b6-a3bc-615971d6f0df/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:23:27 compute-0 nova_compute[185389]: 2026-01-26 17:23:27.020 185393 DEBUG oslo_concurrency.lockutils [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] Lock "/var/lib/nova/instances/69a46725-8a69-43b6-a3bc-615971d6f0df/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:23:27 compute-0 nova_compute[185389]: 2026-01-26 17:23:27.021 185393 DEBUG oslo_concurrency.lockutils [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] Lock "/var/lib/nova/instances/69a46725-8a69-43b6-a3bc-615971d6f0df/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:23:27 compute-0 nova_compute[185389]: 2026-01-26 17:23:27.035 185393 DEBUG oslo_concurrency.processutils [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/fce4846b88d54584e094a4cd72b3ae3b642ea493 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:23:27 compute-0 nova_compute[185389]: 2026-01-26 17:23:27.098 185393 DEBUG oslo_concurrency.processutils [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/fce4846b88d54584e094a4cd72b3ae3b642ea493 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:23:27 compute-0 nova_compute[185389]: 2026-01-26 17:23:27.100 185393 DEBUG oslo_concurrency.lockutils [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] Acquiring lock "fce4846b88d54584e094a4cd72b3ae3b642ea493" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:23:27 compute-0 nova_compute[185389]: 2026-01-26 17:23:27.101 185393 DEBUG oslo_concurrency.lockutils [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] Lock "fce4846b88d54584e094a4cd72b3ae3b642ea493" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:23:27 compute-0 nova_compute[185389]: 2026-01-26 17:23:27.126 185393 DEBUG oslo_concurrency.processutils [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/fce4846b88d54584e094a4cd72b3ae3b642ea493 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:23:27 compute-0 nova_compute[185389]: 2026-01-26 17:23:27.194 185393 DEBUG oslo_concurrency.processutils [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/fce4846b88d54584e094a4cd72b3ae3b642ea493 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:23:27 compute-0 nova_compute[185389]: 2026-01-26 17:23:27.195 185393 DEBUG oslo_concurrency.processutils [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/fce4846b88d54584e094a4cd72b3ae3b642ea493,backing_fmt=raw /var/lib/nova/instances/69a46725-8a69-43b6-a3bc-615971d6f0df/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:23:27 compute-0 nova_compute[185389]: 2026-01-26 17:23:27.236 185393 DEBUG oslo_concurrency.processutils [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/fce4846b88d54584e094a4cd72b3ae3b642ea493,backing_fmt=raw /var/lib/nova/instances/69a46725-8a69-43b6-a3bc-615971d6f0df/disk 1073741824" returned: 0 in 0.041s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:23:27 compute-0 nova_compute[185389]: 2026-01-26 17:23:27.237 185393 DEBUG oslo_concurrency.lockutils [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] Lock "fce4846b88d54584e094a4cd72b3ae3b642ea493" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.136s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:23:27 compute-0 nova_compute[185389]: 2026-01-26 17:23:27.237 185393 DEBUG oslo_concurrency.processutils [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/fce4846b88d54584e094a4cd72b3ae3b642ea493 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:23:27 compute-0 nova_compute[185389]: 2026-01-26 17:23:27.308 185393 DEBUG oslo_concurrency.processutils [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/fce4846b88d54584e094a4cd72b3ae3b642ea493 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:23:27 compute-0 nova_compute[185389]: 2026-01-26 17:23:27.309 185393 DEBUG nova.virt.disk.api [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] Checking if we can resize image /var/lib/nova/instances/69a46725-8a69-43b6-a3bc-615971d6f0df/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 26 17:23:27 compute-0 nova_compute[185389]: 2026-01-26 17:23:27.310 185393 DEBUG oslo_concurrency.processutils [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/69a46725-8a69-43b6-a3bc-615971d6f0df/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:23:27 compute-0 nova_compute[185389]: 2026-01-26 17:23:27.374 185393 DEBUG oslo_concurrency.processutils [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/69a46725-8a69-43b6-a3bc-615971d6f0df/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:23:27 compute-0 nova_compute[185389]: 2026-01-26 17:23:27.375 185393 DEBUG nova.virt.disk.api [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] Cannot resize image /var/lib/nova/instances/69a46725-8a69-43b6-a3bc-615971d6f0df/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 26 17:23:27 compute-0 nova_compute[185389]: 2026-01-26 17:23:27.376 185393 DEBUG nova.objects.instance [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] Lazy-loading 'migration_context' on Instance uuid 69a46725-8a69-43b6-a3bc-615971d6f0df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 17:23:27 compute-0 nova_compute[185389]: 2026-01-26 17:23:27.416 185393 DEBUG nova.virt.libvirt.driver [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] [instance: 69a46725-8a69-43b6-a3bc-615971d6f0df] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 17:23:27 compute-0 nova_compute[185389]: 2026-01-26 17:23:27.416 185393 DEBUG nova.virt.libvirt.driver [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] [instance: 69a46725-8a69-43b6-a3bc-615971d6f0df] Ensure instance console log exists: /var/lib/nova/instances/69a46725-8a69-43b6-a3bc-615971d6f0df/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 17:23:27 compute-0 nova_compute[185389]: 2026-01-26 17:23:27.417 185393 DEBUG oslo_concurrency.lockutils [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:23:27 compute-0 nova_compute[185389]: 2026-01-26 17:23:27.417 185393 DEBUG oslo_concurrency.lockutils [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:23:27 compute-0 nova_compute[185389]: 2026-01-26 17:23:27.417 185393 DEBUG oslo_concurrency.lockutils [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:23:27 compute-0 nova_compute[185389]: 2026-01-26 17:23:27.764 185393 INFO nova.virt.libvirt.driver [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] Creating config drive at /var/lib/nova/instances/cf6218c0-bc2c-4097-91df-f60657ef7ab1/disk.config
Jan 26 17:23:27 compute-0 nova_compute[185389]: 2026-01-26 17:23:27.770 185393 DEBUG oslo_concurrency.processutils [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cf6218c0-bc2c-4097-91df-f60657ef7ab1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3eiz4fd6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:23:27 compute-0 nova_compute[185389]: 2026-01-26 17:23:27.792 185393 DEBUG nova.policy [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1ba2aac01dc64b1f9c69a2a78d95c6d5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ff6e46591ae14b9183698121bab3d554', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 17:23:27 compute-0 nova_compute[185389]: 2026-01-26 17:23:27.898 185393 DEBUG oslo_concurrency.processutils [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cf6218c0-bc2c-4097-91df-f60657ef7ab1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3eiz4fd6" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:23:27 compute-0 kernel: tap994f4b51-01: entered promiscuous mode
Jan 26 17:23:27 compute-0 ovn_controller[97699]: 2026-01-26T17:23:27Z|00112|binding|INFO|Claiming lport 994f4b51-014f-469e-9096-4ffe2dafa019 for this chassis.
Jan 26 17:23:27 compute-0 ovn_controller[97699]: 2026-01-26T17:23:27Z|00113|binding|INFO|994f4b51-014f-469e-9096-4ffe2dafa019: Claiming fa:16:3e:d9:71:2d 10.100.0.13
Jan 26 17:23:27 compute-0 nova_compute[185389]: 2026-01-26 17:23:27.969 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:27 compute-0 NetworkManager[56253]: <info>  [1769448207.9718] manager: (tap994f4b51-01): new Tun device (/org/freedesktop/NetworkManager/Devices/56)
Jan 26 17:23:27 compute-0 nova_compute[185389]: 2026-01-26 17:23:27.976 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:27 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:27.986 106955 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d9:71:2d 10.100.0.13'], port_security=['fa:16:3e:d9:71:2d 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'cf6218c0-bc2c-4097-91df-f60657ef7ab1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-181e9ee7-4b3f-4c71-9f87-ee525fae0a23', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '72e07b00ccf54deaa85258e2c3332b45', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cabd41bb-de87-4531-96ff-89d10e2bc223', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ba7e92f1-bf2b-49e8-a683-c5ce4fc70674, chassis=[<ovs.db.idl.Row object at 0x7faee18cfdf0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7faee18cfdf0>], logical_port=994f4b51-014f-469e-9096-4ffe2dafa019) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 17:23:27 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:27.988 106955 INFO neutron.agent.ovn.metadata.agent [-] Port 994f4b51-014f-469e-9096-4ffe2dafa019 in datapath 181e9ee7-4b3f-4c71-9f87-ee525fae0a23 bound to our chassis
Jan 26 17:23:27 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:27.990 106955 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 181e9ee7-4b3f-4c71-9f87-ee525fae0a23
Jan 26 17:23:28 compute-0 nova_compute[185389]: 2026-01-26 17:23:28.002 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:28 compute-0 nova_compute[185389]: 2026-01-26 17:23:28.006 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:28 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:28.008 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[d5545d96-a02a-46e7-9a28-dd1878fc95e6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:23:28 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:28.009 106955 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap181e9ee7-41 in ovnmeta-181e9ee7-4b3f-4c71-9f87-ee525fae0a23 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 17:23:28 compute-0 ovn_controller[97699]: 2026-01-26T17:23:28Z|00114|binding|INFO|Setting lport 994f4b51-014f-469e-9096-4ffe2dafa019 ovn-installed in OVS
Jan 26 17:23:28 compute-0 ovn_controller[97699]: 2026-01-26T17:23:28Z|00115|binding|INFO|Setting lport 994f4b51-014f-469e-9096-4ffe2dafa019 up in Southbound
Jan 26 17:23:28 compute-0 nova_compute[185389]: 2026-01-26 17:23:28.012 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:28 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:28.013 238734 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap181e9ee7-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 17:23:28 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:28.013 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[c718cc0d-77af-4eb0-bad7-a9e43b6c9222]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:23:28 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:28.014 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[02e373b5-cf5f-4664-8919-80534cc0fd76]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:23:28 compute-0 systemd-udevd[257706]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 17:23:28 compute-0 systemd-machined[156679]: New machine qemu-12-instance-0000000b.
Jan 26 17:23:28 compute-0 NetworkManager[56253]: <info>  [1769448208.0388] device (tap994f4b51-01): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 17:23:28 compute-0 NetworkManager[56253]: <info>  [1769448208.0395] device (tap994f4b51-01): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 17:23:28 compute-0 systemd[1]: Started Virtual Machine qemu-12-instance-0000000b.
Jan 26 17:23:28 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:28.045 107449 DEBUG oslo.privsep.daemon [-] privsep: reply[55060623-4676-4b74-9ee4-bf2865b31ee2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:23:28 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:28.083 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[06a6ce31-acbb-476f-92db-6bfa4f9d3095]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:23:28 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:28.132 238787 DEBUG oslo.privsep.daemon [-] privsep: reply[b0e814b8-a90c-4f3d-a2e5-1b52f2e4e7e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:23:28 compute-0 NetworkManager[56253]: <info>  [1769448208.1459] manager: (tap181e9ee7-40): new Veth device (/org/freedesktop/NetworkManager/Devices/57)
Jan 26 17:23:28 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:28.145 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[89555f62-b6e4-4df7-87fe-1d4cdb20b79d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:23:28 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:28.184 238787 DEBUG oslo.privsep.daemon [-] privsep: reply[d15bfe23-8f66-4fca-b501-c7f1c82c03f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:23:28 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:28.189 238787 DEBUG oslo.privsep.daemon [-] privsep: reply[219fffee-54ab-4096-b74b-6121b363c500]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:23:28 compute-0 NetworkManager[56253]: <info>  [1769448208.2205] device (tap181e9ee7-40): carrier: link connected
Jan 26 17:23:28 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:28.228 238787 DEBUG oslo.privsep.daemon [-] privsep: reply[d44fe2aa-a52a-4c10-b43d-22fb14b079dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:23:28 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:28.255 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[5615f14a-2d32-4256-986c-4ab4b8bc43dd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap181e9ee7-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:85:aa:f4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 37], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 685671, 'reachable_time': 34350, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 257741, 'error': None, 'target': 'ovnmeta-181e9ee7-4b3f-4c71-9f87-ee525fae0a23', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:23:28 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:28.274 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[d90fe5ac-82ef-4fab-ae22-6ec3288de517]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe85:aaf4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 685671, 'tstamp': 685671}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 257742, 'error': None, 'target': 'ovnmeta-181e9ee7-4b3f-4c71-9f87-ee525fae0a23', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:23:28 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:28.300 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[e454244f-374c-47fe-86a1-73de70576cbf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap181e9ee7-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:85:aa:f4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 37], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 685671, 'reachable_time': 34350, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 257743, 'error': None, 'target': 'ovnmeta-181e9ee7-4b3f-4c71-9f87-ee525fae0a23', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:23:28 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:28.335 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[a20fc8f2-59b7-4b57-a445-5118fdc883dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:23:28 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:28.420 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[4b67bfa8-ac0c-4a01-84e1-1475b94d846b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:23:28 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:28.422 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap181e9ee7-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:23:28 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:28.423 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 17:23:28 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:28.423 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap181e9ee7-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:23:28 compute-0 nova_compute[185389]: 2026-01-26 17:23:28.425 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:28 compute-0 NetworkManager[56253]: <info>  [1769448208.4258] manager: (tap181e9ee7-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/58)
Jan 26 17:23:28 compute-0 kernel: tap181e9ee7-40: entered promiscuous mode
Jan 26 17:23:28 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:28.439 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap181e9ee7-40, col_values=(('external_ids', {'iface-id': 'dd4ac4a7-c264-4fc8-95aa-36a318cdf39e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:23:28 compute-0 nova_compute[185389]: 2026-01-26 17:23:28.442 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:28 compute-0 ovn_controller[97699]: 2026-01-26T17:23:28Z|00116|binding|INFO|Releasing lport dd4ac4a7-c264-4fc8-95aa-36a318cdf39e from this chassis (sb_readonly=0)
Jan 26 17:23:28 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:28.450 106955 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/181e9ee7-4b3f-4c71-9f87-ee525fae0a23.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/181e9ee7-4b3f-4c71-9f87-ee525fae0a23.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 17:23:28 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:28.452 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[50cc1729-7ef2-4f0d-9404-726c5fa1e431]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:23:28 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:28.453 106955 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 17:23:28 compute-0 ovn_metadata_agent[106950]: global
Jan 26 17:23:28 compute-0 ovn_metadata_agent[106950]:     log         /dev/log local0 debug
Jan 26 17:23:28 compute-0 ovn_metadata_agent[106950]:     log-tag     haproxy-metadata-proxy-181e9ee7-4b3f-4c71-9f87-ee525fae0a23
Jan 26 17:23:28 compute-0 ovn_metadata_agent[106950]:     user        root
Jan 26 17:23:28 compute-0 ovn_metadata_agent[106950]:     group       root
Jan 26 17:23:28 compute-0 ovn_metadata_agent[106950]:     maxconn     1024
Jan 26 17:23:28 compute-0 ovn_metadata_agent[106950]:     pidfile     /var/lib/neutron/external/pids/181e9ee7-4b3f-4c71-9f87-ee525fae0a23.pid.haproxy
Jan 26 17:23:28 compute-0 ovn_metadata_agent[106950]:     daemon
Jan 26 17:23:28 compute-0 ovn_metadata_agent[106950]: 
Jan 26 17:23:28 compute-0 ovn_metadata_agent[106950]: defaults
Jan 26 17:23:28 compute-0 ovn_metadata_agent[106950]:     log global
Jan 26 17:23:28 compute-0 ovn_metadata_agent[106950]:     mode http
Jan 26 17:23:28 compute-0 ovn_metadata_agent[106950]:     option httplog
Jan 26 17:23:28 compute-0 ovn_metadata_agent[106950]:     option dontlognull
Jan 26 17:23:28 compute-0 ovn_metadata_agent[106950]:     option http-server-close
Jan 26 17:23:28 compute-0 ovn_metadata_agent[106950]:     option forwardfor
Jan 26 17:23:28 compute-0 ovn_metadata_agent[106950]:     retries                 3
Jan 26 17:23:28 compute-0 ovn_metadata_agent[106950]:     timeout http-request    30s
Jan 26 17:23:28 compute-0 ovn_metadata_agent[106950]:     timeout connect         30s
Jan 26 17:23:28 compute-0 ovn_metadata_agent[106950]:     timeout client          32s
Jan 26 17:23:28 compute-0 ovn_metadata_agent[106950]:     timeout server          32s
Jan 26 17:23:28 compute-0 ovn_metadata_agent[106950]:     timeout http-keep-alive 30s
Jan 26 17:23:28 compute-0 ovn_metadata_agent[106950]: 
Jan 26 17:23:28 compute-0 ovn_metadata_agent[106950]: 
Jan 26 17:23:28 compute-0 ovn_metadata_agent[106950]: listen listener
Jan 26 17:23:28 compute-0 ovn_metadata_agent[106950]:     bind 169.254.169.254:80
Jan 26 17:23:28 compute-0 ovn_metadata_agent[106950]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 17:23:28 compute-0 ovn_metadata_agent[106950]:     http-request add-header X-OVN-Network-ID 181e9ee7-4b3f-4c71-9f87-ee525fae0a23
Jan 26 17:23:28 compute-0 ovn_metadata_agent[106950]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 17:23:28 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:28.453 106955 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-181e9ee7-4b3f-4c71-9f87-ee525fae0a23', 'env', 'PROCESS_TAG=haproxy-181e9ee7-4b3f-4c71-9f87-ee525fae0a23', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/181e9ee7-4b3f-4c71-9f87-ee525fae0a23.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 17:23:28 compute-0 nova_compute[185389]: 2026-01-26 17:23:28.465 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:28 compute-0 nova_compute[185389]: 2026-01-26 17:23:28.642 185393 DEBUG nova.network.neutron [req-69dc9534-ae83-4ce9-b5f1-bd746f59e10d req-32d05beb-3fa6-4e85-9c89-a78b99791f8a 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] Updated VIF entry in instance network info cache for port 994f4b51-014f-469e-9096-4ffe2dafa019. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 17:23:28 compute-0 nova_compute[185389]: 2026-01-26 17:23:28.642 185393 DEBUG nova.network.neutron [req-69dc9534-ae83-4ce9-b5f1-bd746f59e10d req-32d05beb-3fa6-4e85-9c89-a78b99791f8a 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] Updating instance_info_cache with network_info: [{"id": "994f4b51-014f-469e-9096-4ffe2dafa019", "address": "fa:16:3e:d9:71:2d", "network": {"id": "181e9ee7-4b3f-4c71-9f87-ee525fae0a23", "bridge": "br-int", "label": "tempest-network-smoke--2054182957", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "72e07b00ccf54deaa85258e2c3332b45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap994f4b51-01", "ovs_interfaceid": "994f4b51-014f-469e-9096-4ffe2dafa019", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 17:23:28 compute-0 nova_compute[185389]: 2026-01-26 17:23:28.672 185393 DEBUG oslo_concurrency.lockutils [req-69dc9534-ae83-4ce9-b5f1-bd746f59e10d req-32d05beb-3fa6-4e85-9c89-a78b99791f8a 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Releasing lock "refresh_cache-cf6218c0-bc2c-4097-91df-f60657ef7ab1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 17:23:28 compute-0 nova_compute[185389]: 2026-01-26 17:23:28.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:23:28 compute-0 nova_compute[185389]: 2026-01-26 17:23:28.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:23:28 compute-0 nova_compute[185389]: 2026-01-26 17:23:28.720 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 26 17:23:29 compute-0 podman[257775]: 2026-01-26 17:23:28.922338388 +0000 UTC m=+0.033991754 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 17:23:29 compute-0 nova_compute[185389]: 2026-01-26 17:23:29.047 185393 DEBUG nova.compute.manager [req-e522936a-d62e-4c19-8951-f14b76381739 req-bd12dfa6-700c-4ae2-b035-df6b80d8b69e 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] Received event network-vif-plugged-994f4b51-014f-469e-9096-4ffe2dafa019 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 17:23:29 compute-0 nova_compute[185389]: 2026-01-26 17:23:29.048 185393 DEBUG oslo_concurrency.lockutils [req-e522936a-d62e-4c19-8951-f14b76381739 req-bd12dfa6-700c-4ae2-b035-df6b80d8b69e 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquiring lock "cf6218c0-bc2c-4097-91df-f60657ef7ab1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:23:29 compute-0 nova_compute[185389]: 2026-01-26 17:23:29.048 185393 DEBUG oslo_concurrency.lockutils [req-e522936a-d62e-4c19-8951-f14b76381739 req-bd12dfa6-700c-4ae2-b035-df6b80d8b69e 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "cf6218c0-bc2c-4097-91df-f60657ef7ab1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:23:29 compute-0 nova_compute[185389]: 2026-01-26 17:23:29.049 185393 DEBUG oslo_concurrency.lockutils [req-e522936a-d62e-4c19-8951-f14b76381739 req-bd12dfa6-700c-4ae2-b035-df6b80d8b69e 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "cf6218c0-bc2c-4097-91df-f60657ef7ab1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:23:29 compute-0 nova_compute[185389]: 2026-01-26 17:23:29.049 185393 DEBUG nova.compute.manager [req-e522936a-d62e-4c19-8951-f14b76381739 req-bd12dfa6-700c-4ae2-b035-df6b80d8b69e 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] Processing event network-vif-plugged-994f4b51-014f-469e-9096-4ffe2dafa019 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 17:23:29 compute-0 podman[257775]: 2026-01-26 17:23:29.097479702 +0000 UTC m=+0.209133038 container create b4506e9329afec0d89c7cc6b898c94cbf401314416d572edd3b8d3ebf77d8902 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-181e9ee7-4b3f-4c71-9f87-ee525fae0a23, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team)
Jan 26 17:23:29 compute-0 systemd[1]: Started libpod-conmon-b4506e9329afec0d89c7cc6b898c94cbf401314416d572edd3b8d3ebf77d8902.scope.
Jan 26 17:23:29 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:23:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8c9620c5cf6ab314a2c5b3f74964911d8572b82cf7918efbba9e4920e5333fd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 17:23:29 compute-0 podman[257775]: 2026-01-26 17:23:29.206500341 +0000 UTC m=+0.318153727 container init b4506e9329afec0d89c7cc6b898c94cbf401314416d572edd3b8d3ebf77d8902 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-181e9ee7-4b3f-4c71-9f87-ee525fae0a23, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 17:23:29 compute-0 podman[257775]: 2026-01-26 17:23:29.215207828 +0000 UTC m=+0.326861164 container start b4506e9329afec0d89c7cc6b898c94cbf401314416d572edd3b8d3ebf77d8902 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-181e9ee7-4b3f-4c71-9f87-ee525fae0a23, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:23:29 compute-0 neutron-haproxy-ovnmeta-181e9ee7-4b3f-4c71-9f87-ee525fae0a23[257790]: [NOTICE]   (257794) : New worker (257796) forked
Jan 26 17:23:29 compute-0 neutron-haproxy-ovnmeta-181e9ee7-4b3f-4c71-9f87-ee525fae0a23[257790]: [NOTICE]   (257794) : Loading success.
Jan 26 17:23:29 compute-0 nova_compute[185389]: 2026-01-26 17:23:29.514 185393 DEBUG nova.virt.driver [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Emitting event <LifecycleEvent: 1769448209.5139961, cf6218c0-bc2c-4097-91df-f60657ef7ab1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 17:23:29 compute-0 nova_compute[185389]: 2026-01-26 17:23:29.514 185393 INFO nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] VM Started (Lifecycle Event)
Jan 26 17:23:29 compute-0 nova_compute[185389]: 2026-01-26 17:23:29.516 185393 DEBUG nova.compute.manager [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 17:23:29 compute-0 nova_compute[185389]: 2026-01-26 17:23:29.522 185393 DEBUG nova.virt.libvirt.driver [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 17:23:29 compute-0 nova_compute[185389]: 2026-01-26 17:23:29.527 185393 INFO nova.virt.libvirt.driver [-] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] Instance spawned successfully.
Jan 26 17:23:29 compute-0 nova_compute[185389]: 2026-01-26 17:23:29.527 185393 DEBUG nova.virt.libvirt.driver [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 17:23:29 compute-0 nova_compute[185389]: 2026-01-26 17:23:29.541 185393 DEBUG nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 17:23:29 compute-0 nova_compute[185389]: 2026-01-26 17:23:29.549 185393 DEBUG nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 17:23:29 compute-0 nova_compute[185389]: 2026-01-26 17:23:29.554 185393 DEBUG nova.virt.libvirt.driver [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 17:23:29 compute-0 nova_compute[185389]: 2026-01-26 17:23:29.554 185393 DEBUG nova.virt.libvirt.driver [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 17:23:29 compute-0 nova_compute[185389]: 2026-01-26 17:23:29.555 185393 DEBUG nova.virt.libvirt.driver [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 17:23:29 compute-0 nova_compute[185389]: 2026-01-26 17:23:29.555 185393 DEBUG nova.virt.libvirt.driver [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 17:23:29 compute-0 nova_compute[185389]: 2026-01-26 17:23:29.557 185393 DEBUG nova.virt.libvirt.driver [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 17:23:29 compute-0 nova_compute[185389]: 2026-01-26 17:23:29.558 185393 DEBUG nova.virt.libvirt.driver [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 17:23:29 compute-0 nova_compute[185389]: 2026-01-26 17:23:29.592 185393 INFO nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 17:23:29 compute-0 nova_compute[185389]: 2026-01-26 17:23:29.592 185393 DEBUG nova.virt.driver [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Emitting event <LifecycleEvent: 1769448209.5140924, cf6218c0-bc2c-4097-91df-f60657ef7ab1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 17:23:29 compute-0 nova_compute[185389]: 2026-01-26 17:23:29.592 185393 INFO nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] VM Paused (Lifecycle Event)
Jan 26 17:23:29 compute-0 ovn_controller[97699]: 2026-01-26T17:23:29Z|00117|binding|INFO|Releasing lport dd4ac4a7-c264-4fc8-95aa-36a318cdf39e from this chassis (sb_readonly=0)
Jan 26 17:23:29 compute-0 ovn_controller[97699]: 2026-01-26T17:23:29Z|00118|binding|INFO|Releasing lport d58b7d53-5cc1-4ed8-aa06-162121fd1800 from this chassis (sb_readonly=0)
Jan 26 17:23:29 compute-0 nova_compute[185389]: 2026-01-26 17:23:29.646 185393 DEBUG nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 17:23:29 compute-0 nova_compute[185389]: 2026-01-26 17:23:29.652 185393 DEBUG nova.virt.driver [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Emitting event <LifecycleEvent: 1769448209.5218165, cf6218c0-bc2c-4097-91df-f60657ef7ab1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 17:23:29 compute-0 nova_compute[185389]: 2026-01-26 17:23:29.652 185393 INFO nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] VM Resumed (Lifecycle Event)
Jan 26 17:23:29 compute-0 nova_compute[185389]: 2026-01-26 17:23:29.664 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:29 compute-0 nova_compute[185389]: 2026-01-26 17:23:29.673 185393 INFO nova.compute.manager [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] Took 10.99 seconds to spawn the instance on the hypervisor.
Jan 26 17:23:29 compute-0 nova_compute[185389]: 2026-01-26 17:23:29.674 185393 DEBUG nova.compute.manager [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 17:23:29 compute-0 nova_compute[185389]: 2026-01-26 17:23:29.678 185393 DEBUG nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 17:23:29 compute-0 nova_compute[185389]: 2026-01-26 17:23:29.690 185393 DEBUG nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 17:23:29 compute-0 podman[201244]: time="2026-01-26T17:23:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 17:23:29 compute-0 podman[201244]: @ - - [26/Jan/2026:17:23:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29741 "" "Go-http-client/1.1"
Jan 26 17:23:29 compute-0 podman[201244]: @ - - [26/Jan/2026:17:23:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4841 "" "Go-http-client/1.1"
Jan 26 17:23:29 compute-0 nova_compute[185389]: 2026-01-26 17:23:29.815 185393 INFO nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 17:23:29 compute-0 nova_compute[185389]: 2026-01-26 17:23:29.827 185393 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769448194.8264472, 3a17d6a2-7bda-406b-a180-049f0e7adc78 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 17:23:29 compute-0 nova_compute[185389]: 2026-01-26 17:23:29.828 185393 INFO nova.compute.manager [-] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] VM Stopped (Lifecycle Event)
Jan 26 17:23:29 compute-0 nova_compute[185389]: 2026-01-26 17:23:29.979 185393 INFO nova.compute.manager [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] Took 12.67 seconds to build instance.
Jan 26 17:23:30 compute-0 nova_compute[185389]: 2026-01-26 17:23:30.143 185393 DEBUG nova.compute.manager [None req-95e15080-cbea-4211-ab76-5c2ad999c106 - - - - - -] [instance: 3a17d6a2-7bda-406b-a180-049f0e7adc78] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 17:23:30 compute-0 nova_compute[185389]: 2026-01-26 17:23:30.475 185393 DEBUG oslo_concurrency.lockutils [None req-06503a45-1973-442f-b359-2ae34468a6c0 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Lock "cf6218c0-bc2c-4097-91df-f60657ef7ab1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.269s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:23:30 compute-0 nova_compute[185389]: 2026-01-26 17:23:30.734 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:23:30 compute-0 nova_compute[185389]: 2026-01-26 17:23:30.899 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:31 compute-0 nova_compute[185389]: 2026-01-26 17:23:31.014 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:31 compute-0 nova_compute[185389]: 2026-01-26 17:23:31.077 185393 DEBUG nova.network.neutron [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] [instance: 69a46725-8a69-43b6-a3bc-615971d6f0df] Successfully created port: 867ab8e9-18b5-4260-b370-f39c517ff96b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 17:23:31 compute-0 openstack_network_exporter[204387]: ERROR   17:23:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 17:23:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:23:31 compute-0 openstack_network_exporter[204387]: ERROR   17:23:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 17:23:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:23:32 compute-0 nova_compute[185389]: 2026-01-26 17:23:32.240 185393 DEBUG nova.compute.manager [req-f84aa73e-2a04-4fc2-b762-869423039596 req-047aa641-b32a-414a-a148-5ad44e5c74cf 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] Received event network-vif-plugged-994f4b51-014f-469e-9096-4ffe2dafa019 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 17:23:32 compute-0 nova_compute[185389]: 2026-01-26 17:23:32.241 185393 DEBUG oslo_concurrency.lockutils [req-f84aa73e-2a04-4fc2-b762-869423039596 req-047aa641-b32a-414a-a148-5ad44e5c74cf 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquiring lock "cf6218c0-bc2c-4097-91df-f60657ef7ab1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:23:32 compute-0 nova_compute[185389]: 2026-01-26 17:23:32.241 185393 DEBUG oslo_concurrency.lockutils [req-f84aa73e-2a04-4fc2-b762-869423039596 req-047aa641-b32a-414a-a148-5ad44e5c74cf 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "cf6218c0-bc2c-4097-91df-f60657ef7ab1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:23:32 compute-0 nova_compute[185389]: 2026-01-26 17:23:32.241 185393 DEBUG oslo_concurrency.lockutils [req-f84aa73e-2a04-4fc2-b762-869423039596 req-047aa641-b32a-414a-a148-5ad44e5c74cf 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "cf6218c0-bc2c-4097-91df-f60657ef7ab1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:23:32 compute-0 nova_compute[185389]: 2026-01-26 17:23:32.241 185393 DEBUG nova.compute.manager [req-f84aa73e-2a04-4fc2-b762-869423039596 req-047aa641-b32a-414a-a148-5ad44e5c74cf 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] No waiting events found dispatching network-vif-plugged-994f4b51-014f-469e-9096-4ffe2dafa019 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 17:23:32 compute-0 nova_compute[185389]: 2026-01-26 17:23:32.242 185393 WARNING nova.compute.manager [req-f84aa73e-2a04-4fc2-b762-869423039596 req-047aa641-b32a-414a-a148-5ad44e5c74cf 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] Received unexpected event network-vif-plugged-994f4b51-014f-469e-9096-4ffe2dafa019 for instance with vm_state active and task_state None.
Jan 26 17:23:32 compute-0 nova_compute[185389]: 2026-01-26 17:23:32.572 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:34 compute-0 nova_compute[185389]: 2026-01-26 17:23:34.243 185393 DEBUG nova.network.neutron [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] [instance: 69a46725-8a69-43b6-a3bc-615971d6f0df] Successfully updated port: 867ab8e9-18b5-4260-b370-f39c517ff96b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 17:23:34 compute-0 nova_compute[185389]: 2026-01-26 17:23:34.265 185393 DEBUG oslo_concurrency.lockutils [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] Acquiring lock "refresh_cache-69a46725-8a69-43b6-a3bc-615971d6f0df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 17:23:34 compute-0 nova_compute[185389]: 2026-01-26 17:23:34.267 185393 DEBUG oslo_concurrency.lockutils [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] Acquired lock "refresh_cache-69a46725-8a69-43b6-a3bc-615971d6f0df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 17:23:34 compute-0 nova_compute[185389]: 2026-01-26 17:23:34.269 185393 DEBUG nova.network.neutron [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] [instance: 69a46725-8a69-43b6-a3bc-615971d6f0df] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 17:23:34 compute-0 nova_compute[185389]: 2026-01-26 17:23:34.517 185393 DEBUG nova.compute.manager [req-55a8b1ee-4f6e-4859-9050-16df971f859a req-82615e0a-4035-4ff5-8d4c-a4eec42d8117 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 69a46725-8a69-43b6-a3bc-615971d6f0df] Received event network-changed-867ab8e9-18b5-4260-b370-f39c517ff96b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 17:23:34 compute-0 nova_compute[185389]: 2026-01-26 17:23:34.517 185393 DEBUG nova.compute.manager [req-55a8b1ee-4f6e-4859-9050-16df971f859a req-82615e0a-4035-4ff5-8d4c-a4eec42d8117 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 69a46725-8a69-43b6-a3bc-615971d6f0df] Refreshing instance network info cache due to event network-changed-867ab8e9-18b5-4260-b370-f39c517ff96b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 17:23:34 compute-0 nova_compute[185389]: 2026-01-26 17:23:34.518 185393 DEBUG oslo_concurrency.lockutils [req-55a8b1ee-4f6e-4859-9050-16df971f859a req-82615e0a-4035-4ff5-8d4c-a4eec42d8117 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquiring lock "refresh_cache-69a46725-8a69-43b6-a3bc-615971d6f0df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 17:23:34 compute-0 nova_compute[185389]: 2026-01-26 17:23:34.608 185393 DEBUG nova.network.neutron [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] [instance: 69a46725-8a69-43b6-a3bc-615971d6f0df] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 17:23:35 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:35.382 106955 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '0e:35:12', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'de:09:3c:73:7d:ab'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 17:23:35 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:35.383 106955 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 26 17:23:35 compute-0 nova_compute[185389]: 2026-01-26 17:23:35.386 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:35 compute-0 nova_compute[185389]: 2026-01-26 17:23:35.902 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:36 compute-0 nova_compute[185389]: 2026-01-26 17:23:36.016 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:36 compute-0 nova_compute[185389]: 2026-01-26 17:23:36.967 185393 DEBUG nova.network.neutron [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] [instance: 69a46725-8a69-43b6-a3bc-615971d6f0df] Updating instance_info_cache with network_info: [{"id": "867ab8e9-18b5-4260-b370-f39c517ff96b", "address": "fa:16:3e:52:a5:be", "network": {"id": "87c81880-3494-4deb-b3df-3b6a60ff84ca", "bridge": "br-int", "label": "tempest-ServersTestJSON-1079224151-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff6e46591ae14b9183698121bab3d554", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap867ab8e9-18", "ovs_interfaceid": "867ab8e9-18b5-4260-b370-f39c517ff96b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 17:23:37 compute-0 nova_compute[185389]: 2026-01-26 17:23:37.213 185393 DEBUG oslo_concurrency.lockutils [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] Releasing lock "refresh_cache-69a46725-8a69-43b6-a3bc-615971d6f0df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 17:23:37 compute-0 nova_compute[185389]: 2026-01-26 17:23:37.213 185393 DEBUG nova.compute.manager [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] [instance: 69a46725-8a69-43b6-a3bc-615971d6f0df] Instance network_info: |[{"id": "867ab8e9-18b5-4260-b370-f39c517ff96b", "address": "fa:16:3e:52:a5:be", "network": {"id": "87c81880-3494-4deb-b3df-3b6a60ff84ca", "bridge": "br-int", "label": "tempest-ServersTestJSON-1079224151-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff6e46591ae14b9183698121bab3d554", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap867ab8e9-18", "ovs_interfaceid": "867ab8e9-18b5-4260-b370-f39c517ff96b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 17:23:37 compute-0 nova_compute[185389]: 2026-01-26 17:23:37.213 185393 DEBUG oslo_concurrency.lockutils [req-55a8b1ee-4f6e-4859-9050-16df971f859a req-82615e0a-4035-4ff5-8d4c-a4eec42d8117 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquired lock "refresh_cache-69a46725-8a69-43b6-a3bc-615971d6f0df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 17:23:37 compute-0 nova_compute[185389]: 2026-01-26 17:23:37.213 185393 DEBUG nova.network.neutron [req-55a8b1ee-4f6e-4859-9050-16df971f859a req-82615e0a-4035-4ff5-8d4c-a4eec42d8117 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 69a46725-8a69-43b6-a3bc-615971d6f0df] Refreshing network info cache for port 867ab8e9-18b5-4260-b370-f39c517ff96b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 17:23:37 compute-0 nova_compute[185389]: 2026-01-26 17:23:37.216 185393 DEBUG nova.virt.libvirt.driver [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] [instance: 69a46725-8a69-43b6-a3bc-615971d6f0df] Start _get_guest_xml network_info=[{"id": "867ab8e9-18b5-4260-b370-f39c517ff96b", "address": "fa:16:3e:52:a5:be", "network": {"id": "87c81880-3494-4deb-b3df-3b6a60ff84ca", "bridge": "br-int", "label": "tempest-ServersTestJSON-1079224151-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff6e46591ae14b9183698121bab3d554", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap867ab8e9-18", "ovs_interfaceid": "867ab8e9-18b5-4260-b370-f39c517ff96b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T17:20:37Z,direct_url=<?>,disk_format='qcow2',id=90acf026-cf3a-409a-999e-35d89bb9a6bf,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='aa8f1f3bbce34237a208c8e92ca9286f',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T17:20:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'boot_index': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '90acf026-cf3a-409a-999e-35d89bb9a6bf'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 17:23:37 compute-0 nova_compute[185389]: 2026-01-26 17:23:37.223 185393 WARNING nova.virt.libvirt.driver [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:23:37 compute-0 nova_compute[185389]: 2026-01-26 17:23:37.246 185393 DEBUG nova.virt.libvirt.host [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 17:23:37 compute-0 nova_compute[185389]: 2026-01-26 17:23:37.247 185393 DEBUG nova.virt.libvirt.host [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 17:23:37 compute-0 nova_compute[185389]: 2026-01-26 17:23:37.253 185393 DEBUG nova.virt.libvirt.host [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 17:23:37 compute-0 nova_compute[185389]: 2026-01-26 17:23:37.257 185393 DEBUG nova.virt.libvirt.host [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 17:23:37 compute-0 nova_compute[185389]: 2026-01-26 17:23:37.259 185393 DEBUG nova.virt.libvirt.driver [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 17:23:37 compute-0 nova_compute[185389]: 2026-01-26 17:23:37.261 185393 DEBUG nova.virt.hardware [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T17:20:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8d013773-e8ea-4b83-a8e3-f58d9749637f',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T17:20:37Z,direct_url=<?>,disk_format='qcow2',id=90acf026-cf3a-409a-999e-35d89bb9a6bf,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='aa8f1f3bbce34237a208c8e92ca9286f',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T17:20:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 17:23:37 compute-0 nova_compute[185389]: 2026-01-26 17:23:37.262 185393 DEBUG nova.virt.hardware [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 17:23:37 compute-0 nova_compute[185389]: 2026-01-26 17:23:37.263 185393 DEBUG nova.virt.hardware [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 17:23:37 compute-0 nova_compute[185389]: 2026-01-26 17:23:37.264 185393 DEBUG nova.virt.hardware [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 17:23:37 compute-0 nova_compute[185389]: 2026-01-26 17:23:37.264 185393 DEBUG nova.virt.hardware [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 17:23:37 compute-0 nova_compute[185389]: 2026-01-26 17:23:37.264 185393 DEBUG nova.virt.hardware [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 17:23:37 compute-0 nova_compute[185389]: 2026-01-26 17:23:37.265 185393 DEBUG nova.virt.hardware [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 17:23:37 compute-0 nova_compute[185389]: 2026-01-26 17:23:37.265 185393 DEBUG nova.virt.hardware [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 17:23:37 compute-0 nova_compute[185389]: 2026-01-26 17:23:37.265 185393 DEBUG nova.virt.hardware [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 17:23:37 compute-0 nova_compute[185389]: 2026-01-26 17:23:37.265 185393 DEBUG nova.virt.hardware [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 17:23:37 compute-0 nova_compute[185389]: 2026-01-26 17:23:37.266 185393 DEBUG nova.virt.hardware [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 17:23:37 compute-0 nova_compute[185389]: 2026-01-26 17:23:37.269 185393 DEBUG nova.virt.libvirt.vif [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T17:23:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1917976453',display_name='tempest-ServersTestJSON-server-1917976453',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1917976453',id=12,image_ref='90acf026-cf3a-409a-999e-35d89bb9a6bf',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDIIXWOAUU+wzgbCxTuZ1CgRTJmh6zVCSpX/ed86eRXUM4OBvjRpAPW6jrLl1JW/p7jyneusxoBg4x7+CE629CpwNm9y1Ynw3oRQYINFOZSrNmIngBP3qnxLcn75wwg+AA==',key_name='tempest-keypair-934613233',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ff6e46591ae14b9183698121bab3d554',ramdisk_id='',reservation_id='r-ro8oi3lj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='90acf026-cf3a-409a-999e-35d89bb9a6bf',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1750390716',owner_user_name='tempest-ServersTestJSON-1750390716-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T17:23:26Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1ba2aac01dc64b1f9c69a2a78d95c6d5',uuid=69a46725-8a69-43b6-a3bc-615971d6f0df,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "867ab8e9-18b5-4260-b370-f39c517ff96b", "address": "fa:16:3e:52:a5:be", "network": {"id": "87c81880-3494-4deb-b3df-3b6a60ff84ca", "bridge": "br-int", "label": "tempest-ServersTestJSON-1079224151-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff6e46591ae14b9183698121bab3d554", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap867ab8e9-18", "ovs_interfaceid": "867ab8e9-18b5-4260-b370-f39c517ff96b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 17:23:37 compute-0 nova_compute[185389]: 2026-01-26 17:23:37.270 185393 DEBUG nova.network.os_vif_util [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] Converting VIF {"id": "867ab8e9-18b5-4260-b370-f39c517ff96b", "address": "fa:16:3e:52:a5:be", "network": {"id": "87c81880-3494-4deb-b3df-3b6a60ff84ca", "bridge": "br-int", "label": "tempest-ServersTestJSON-1079224151-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff6e46591ae14b9183698121bab3d554", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap867ab8e9-18", "ovs_interfaceid": "867ab8e9-18b5-4260-b370-f39c517ff96b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 17:23:37 compute-0 nova_compute[185389]: 2026-01-26 17:23:37.271 185393 DEBUG nova.network.os_vif_util [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:52:a5:be,bridge_name='br-int',has_traffic_filtering=True,id=867ab8e9-18b5-4260-b370-f39c517ff96b,network=Network(87c81880-3494-4deb-b3df-3b6a60ff84ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap867ab8e9-18') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 17:23:37 compute-0 nova_compute[185389]: 2026-01-26 17:23:37.272 185393 DEBUG nova.objects.instance [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] Lazy-loading 'pci_devices' on Instance uuid 69a46725-8a69-43b6-a3bc-615971d6f0df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 17:23:37 compute-0 nova_compute[185389]: 2026-01-26 17:23:37.301 185393 DEBUG nova.virt.libvirt.driver [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] [instance: 69a46725-8a69-43b6-a3bc-615971d6f0df] End _get_guest_xml xml=<domain type="kvm">
Jan 26 17:23:37 compute-0 nova_compute[185389]:   <uuid>69a46725-8a69-43b6-a3bc-615971d6f0df</uuid>
Jan 26 17:23:37 compute-0 nova_compute[185389]:   <name>instance-0000000c</name>
Jan 26 17:23:37 compute-0 nova_compute[185389]:   <memory>131072</memory>
Jan 26 17:23:37 compute-0 nova_compute[185389]:   <vcpu>1</vcpu>
Jan 26 17:23:37 compute-0 nova_compute[185389]:   <metadata>
Jan 26 17:23:37 compute-0 nova_compute[185389]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 17:23:37 compute-0 nova_compute[185389]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 17:23:37 compute-0 nova_compute[185389]:       <nova:name>tempest-ServersTestJSON-server-1917976453</nova:name>
Jan 26 17:23:37 compute-0 nova_compute[185389]:       <nova:creationTime>2026-01-26 17:23:37</nova:creationTime>
Jan 26 17:23:37 compute-0 nova_compute[185389]:       <nova:flavor name="m1.nano">
Jan 26 17:23:37 compute-0 nova_compute[185389]:         <nova:memory>128</nova:memory>
Jan 26 17:23:37 compute-0 nova_compute[185389]:         <nova:disk>1</nova:disk>
Jan 26 17:23:37 compute-0 nova_compute[185389]:         <nova:swap>0</nova:swap>
Jan 26 17:23:37 compute-0 nova_compute[185389]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 17:23:37 compute-0 nova_compute[185389]:         <nova:vcpus>1</nova:vcpus>
Jan 26 17:23:37 compute-0 nova_compute[185389]:       </nova:flavor>
Jan 26 17:23:37 compute-0 nova_compute[185389]:       <nova:owner>
Jan 26 17:23:37 compute-0 nova_compute[185389]:         <nova:user uuid="1ba2aac01dc64b1f9c69a2a78d95c6d5">tempest-ServersTestJSON-1750390716-project-member</nova:user>
Jan 26 17:23:37 compute-0 nova_compute[185389]:         <nova:project uuid="ff6e46591ae14b9183698121bab3d554">tempest-ServersTestJSON-1750390716</nova:project>
Jan 26 17:23:37 compute-0 nova_compute[185389]:       </nova:owner>
Jan 26 17:23:37 compute-0 nova_compute[185389]:       <nova:root type="image" uuid="90acf026-cf3a-409a-999e-35d89bb9a6bf"/>
Jan 26 17:23:37 compute-0 nova_compute[185389]:       <nova:ports>
Jan 26 17:23:37 compute-0 nova_compute[185389]:         <nova:port uuid="867ab8e9-18b5-4260-b370-f39c517ff96b">
Jan 26 17:23:37 compute-0 nova_compute[185389]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 26 17:23:37 compute-0 nova_compute[185389]:         </nova:port>
Jan 26 17:23:37 compute-0 nova_compute[185389]:       </nova:ports>
Jan 26 17:23:37 compute-0 nova_compute[185389]:     </nova:instance>
Jan 26 17:23:37 compute-0 nova_compute[185389]:   </metadata>
Jan 26 17:23:37 compute-0 nova_compute[185389]:   <sysinfo type="smbios">
Jan 26 17:23:37 compute-0 nova_compute[185389]:     <system>
Jan 26 17:23:37 compute-0 nova_compute[185389]:       <entry name="manufacturer">RDO</entry>
Jan 26 17:23:37 compute-0 nova_compute[185389]:       <entry name="product">OpenStack Compute</entry>
Jan 26 17:23:37 compute-0 nova_compute[185389]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 17:23:37 compute-0 nova_compute[185389]:       <entry name="serial">69a46725-8a69-43b6-a3bc-615971d6f0df</entry>
Jan 26 17:23:37 compute-0 nova_compute[185389]:       <entry name="uuid">69a46725-8a69-43b6-a3bc-615971d6f0df</entry>
Jan 26 17:23:37 compute-0 nova_compute[185389]:       <entry name="family">Virtual Machine</entry>
Jan 26 17:23:37 compute-0 nova_compute[185389]:     </system>
Jan 26 17:23:37 compute-0 nova_compute[185389]:   </sysinfo>
Jan 26 17:23:37 compute-0 nova_compute[185389]:   <os>
Jan 26 17:23:37 compute-0 nova_compute[185389]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 17:23:37 compute-0 nova_compute[185389]:     <boot dev="hd"/>
Jan 26 17:23:37 compute-0 nova_compute[185389]:     <smbios mode="sysinfo"/>
Jan 26 17:23:37 compute-0 nova_compute[185389]:   </os>
Jan 26 17:23:37 compute-0 nova_compute[185389]:   <features>
Jan 26 17:23:37 compute-0 nova_compute[185389]:     <acpi/>
Jan 26 17:23:37 compute-0 nova_compute[185389]:     <apic/>
Jan 26 17:23:37 compute-0 nova_compute[185389]:     <vmcoreinfo/>
Jan 26 17:23:37 compute-0 nova_compute[185389]:   </features>
Jan 26 17:23:37 compute-0 nova_compute[185389]:   <clock offset="utc">
Jan 26 17:23:37 compute-0 nova_compute[185389]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 17:23:37 compute-0 nova_compute[185389]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 17:23:37 compute-0 nova_compute[185389]:     <timer name="hpet" present="no"/>
Jan 26 17:23:37 compute-0 nova_compute[185389]:   </clock>
Jan 26 17:23:37 compute-0 nova_compute[185389]:   <cpu mode="host-model" match="exact">
Jan 26 17:23:37 compute-0 nova_compute[185389]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 17:23:37 compute-0 nova_compute[185389]:   </cpu>
Jan 26 17:23:37 compute-0 nova_compute[185389]:   <devices>
Jan 26 17:23:37 compute-0 nova_compute[185389]:     <disk type="file" device="disk">
Jan 26 17:23:37 compute-0 nova_compute[185389]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 26 17:23:37 compute-0 nova_compute[185389]:       <source file="/var/lib/nova/instances/69a46725-8a69-43b6-a3bc-615971d6f0df/disk"/>
Jan 26 17:23:37 compute-0 nova_compute[185389]:       <target dev="vda" bus="virtio"/>
Jan 26 17:23:37 compute-0 nova_compute[185389]:     </disk>
Jan 26 17:23:37 compute-0 nova_compute[185389]:     <disk type="file" device="cdrom">
Jan 26 17:23:37 compute-0 nova_compute[185389]:       <driver name="qemu" type="raw" cache="none"/>
Jan 26 17:23:37 compute-0 nova_compute[185389]:       <source file="/var/lib/nova/instances/69a46725-8a69-43b6-a3bc-615971d6f0df/disk.config"/>
Jan 26 17:23:37 compute-0 nova_compute[185389]:       <target dev="sda" bus="sata"/>
Jan 26 17:23:37 compute-0 nova_compute[185389]:     </disk>
Jan 26 17:23:37 compute-0 nova_compute[185389]:     <interface type="ethernet">
Jan 26 17:23:37 compute-0 nova_compute[185389]:       <mac address="fa:16:3e:52:a5:be"/>
Jan 26 17:23:37 compute-0 nova_compute[185389]:       <model type="virtio"/>
Jan 26 17:23:37 compute-0 nova_compute[185389]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 17:23:37 compute-0 nova_compute[185389]:       <mtu size="1442"/>
Jan 26 17:23:37 compute-0 nova_compute[185389]:       <target dev="tap867ab8e9-18"/>
Jan 26 17:23:37 compute-0 nova_compute[185389]:     </interface>
Jan 26 17:23:37 compute-0 nova_compute[185389]:     <serial type="pty">
Jan 26 17:23:37 compute-0 nova_compute[185389]:       <log file="/var/lib/nova/instances/69a46725-8a69-43b6-a3bc-615971d6f0df/console.log" append="off"/>
Jan 26 17:23:37 compute-0 nova_compute[185389]:     </serial>
Jan 26 17:23:37 compute-0 nova_compute[185389]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 17:23:37 compute-0 nova_compute[185389]:     <video>
Jan 26 17:23:37 compute-0 nova_compute[185389]:       <model type="virtio"/>
Jan 26 17:23:37 compute-0 nova_compute[185389]:     </video>
Jan 26 17:23:37 compute-0 nova_compute[185389]:     <input type="tablet" bus="usb"/>
Jan 26 17:23:37 compute-0 nova_compute[185389]:     <rng model="virtio">
Jan 26 17:23:37 compute-0 nova_compute[185389]:       <backend model="random">/dev/urandom</backend>
Jan 26 17:23:37 compute-0 nova_compute[185389]:     </rng>
Jan 26 17:23:37 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root"/>
Jan 26 17:23:37 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:23:37 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:23:37 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:23:37 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:23:37 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:23:37 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:23:37 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:23:37 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:23:37 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:23:37 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:23:37 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:23:37 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:23:37 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:23:37 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:23:37 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:23:37 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:23:37 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:23:37 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:23:37 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:23:37 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:23:37 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:23:37 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:23:37 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:23:37 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:23:37 compute-0 nova_compute[185389]:     <controller type="usb" index="0"/>
Jan 26 17:23:37 compute-0 nova_compute[185389]:     <memballoon model="virtio">
Jan 26 17:23:37 compute-0 nova_compute[185389]:       <stats period="10"/>
Jan 26 17:23:37 compute-0 nova_compute[185389]:     </memballoon>
Jan 26 17:23:37 compute-0 nova_compute[185389]:   </devices>
Jan 26 17:23:37 compute-0 nova_compute[185389]: </domain>
Jan 26 17:23:37 compute-0 nova_compute[185389]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 17:23:37 compute-0 nova_compute[185389]: 2026-01-26 17:23:37.303 185393 DEBUG nova.compute.manager [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] [instance: 69a46725-8a69-43b6-a3bc-615971d6f0df] Preparing to wait for external event network-vif-plugged-867ab8e9-18b5-4260-b370-f39c517ff96b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 17:23:37 compute-0 nova_compute[185389]: 2026-01-26 17:23:37.304 185393 DEBUG oslo_concurrency.lockutils [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] Acquiring lock "69a46725-8a69-43b6-a3bc-615971d6f0df-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:23:37 compute-0 nova_compute[185389]: 2026-01-26 17:23:37.306 185393 DEBUG oslo_concurrency.lockutils [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] Lock "69a46725-8a69-43b6-a3bc-615971d6f0df-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:23:37 compute-0 nova_compute[185389]: 2026-01-26 17:23:37.311 185393 DEBUG oslo_concurrency.lockutils [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] Lock "69a46725-8a69-43b6-a3bc-615971d6f0df-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:23:37 compute-0 nova_compute[185389]: 2026-01-26 17:23:37.312 185393 DEBUG nova.virt.libvirt.vif [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T17:23:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1917976453',display_name='tempest-ServersTestJSON-server-1917976453',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1917976453',id=12,image_ref='90acf026-cf3a-409a-999e-35d89bb9a6bf',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDIIXWOAUU+wzgbCxTuZ1CgRTJmh6zVCSpX/ed86eRXUM4OBvjRpAPW6jrLl1JW/p7jyneusxoBg4x7+CE629CpwNm9y1Ynw3oRQYINFOZSrNmIngBP3qnxLcn75wwg+AA==',key_name='tempest-keypair-934613233',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ff6e46591ae14b9183698121bab3d554',ramdisk_id='',reservation_id='r-ro8oi3lj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='90acf026-cf3a-409a-999e-35d89bb9a6bf',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1750390716',owner_user_name='tempest-ServersTestJSON-1750390716-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T17:23:26Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1ba2aac01dc64b1f9c69a2a78d95c6d5',uuid=69a46725-8a69-43b6-a3bc-615971d6f0df,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "867ab8e9-18b5-4260-b370-f39c517ff96b", "address": "fa:16:3e:52:a5:be", "network": {"id": "87c81880-3494-4deb-b3df-3b6a60ff84ca", "bridge": "br-int", "label": "tempest-ServersTestJSON-1079224151-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff6e46591ae14b9183698121bab3d554", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap867ab8e9-18", "ovs_interfaceid": "867ab8e9-18b5-4260-b370-f39c517ff96b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 17:23:37 compute-0 nova_compute[185389]: 2026-01-26 17:23:37.313 185393 DEBUG nova.network.os_vif_util [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] Converting VIF {"id": "867ab8e9-18b5-4260-b370-f39c517ff96b", "address": "fa:16:3e:52:a5:be", "network": {"id": "87c81880-3494-4deb-b3df-3b6a60ff84ca", "bridge": "br-int", "label": "tempest-ServersTestJSON-1079224151-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff6e46591ae14b9183698121bab3d554", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap867ab8e9-18", "ovs_interfaceid": "867ab8e9-18b5-4260-b370-f39c517ff96b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 17:23:37 compute-0 nova_compute[185389]: 2026-01-26 17:23:37.313 185393 DEBUG nova.network.os_vif_util [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:52:a5:be,bridge_name='br-int',has_traffic_filtering=True,id=867ab8e9-18b5-4260-b370-f39c517ff96b,network=Network(87c81880-3494-4deb-b3df-3b6a60ff84ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap867ab8e9-18') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 17:23:37 compute-0 nova_compute[185389]: 2026-01-26 17:23:37.314 185393 DEBUG os_vif [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:52:a5:be,bridge_name='br-int',has_traffic_filtering=True,id=867ab8e9-18b5-4260-b370-f39c517ff96b,network=Network(87c81880-3494-4deb-b3df-3b6a60ff84ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap867ab8e9-18') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 17:23:37 compute-0 nova_compute[185389]: 2026-01-26 17:23:37.315 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:37 compute-0 nova_compute[185389]: 2026-01-26 17:23:37.315 185393 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:23:37 compute-0 nova_compute[185389]: 2026-01-26 17:23:37.315 185393 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 17:23:37 compute-0 nova_compute[185389]: 2026-01-26 17:23:37.318 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:37 compute-0 nova_compute[185389]: 2026-01-26 17:23:37.318 185393 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap867ab8e9-18, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:23:37 compute-0 nova_compute[185389]: 2026-01-26 17:23:37.319 185393 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap867ab8e9-18, col_values=(('external_ids', {'iface-id': '867ab8e9-18b5-4260-b370-f39c517ff96b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:52:a5:be', 'vm-uuid': '69a46725-8a69-43b6-a3bc-615971d6f0df'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:23:37 compute-0 nova_compute[185389]: 2026-01-26 17:23:37.321 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:37 compute-0 NetworkManager[56253]: <info>  [1769448217.3221] manager: (tap867ab8e9-18): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/59)
Jan 26 17:23:37 compute-0 nova_compute[185389]: 2026-01-26 17:23:37.323 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 17:23:37 compute-0 nova_compute[185389]: 2026-01-26 17:23:37.330 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:37 compute-0 nova_compute[185389]: 2026-01-26 17:23:37.336 185393 INFO os_vif [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:52:a5:be,bridge_name='br-int',has_traffic_filtering=True,id=867ab8e9-18b5-4260-b370-f39c517ff96b,network=Network(87c81880-3494-4deb-b3df-3b6a60ff84ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap867ab8e9-18')
Jan 26 17:23:37 compute-0 nova_compute[185389]: 2026-01-26 17:23:37.491 185393 DEBUG nova.virt.libvirt.driver [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 17:23:37 compute-0 nova_compute[185389]: 2026-01-26 17:23:37.492 185393 DEBUG nova.virt.libvirt.driver [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 17:23:37 compute-0 nova_compute[185389]: 2026-01-26 17:23:37.492 185393 DEBUG nova.virt.libvirt.driver [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] No VIF found with MAC fa:16:3e:52:a5:be, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 17:23:37 compute-0 nova_compute[185389]: 2026-01-26 17:23:37.493 185393 INFO nova.virt.libvirt.driver [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] [instance: 69a46725-8a69-43b6-a3bc-615971d6f0df] Using config drive
Jan 26 17:23:38 compute-0 nova_compute[185389]: 2026-01-26 17:23:38.534 185393 DEBUG nova.compute.manager [req-bb01be38-9485-4bed-846d-c286a283a000 req-4fa87ba0-0eb0-4ebd-83d2-ddfd935c5624 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] Received event network-changed-994f4b51-014f-469e-9096-4ffe2dafa019 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 17:23:38 compute-0 nova_compute[185389]: 2026-01-26 17:23:38.535 185393 DEBUG nova.compute.manager [req-bb01be38-9485-4bed-846d-c286a283a000 req-4fa87ba0-0eb0-4ebd-83d2-ddfd935c5624 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] Refreshing instance network info cache due to event network-changed-994f4b51-014f-469e-9096-4ffe2dafa019. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 17:23:38 compute-0 nova_compute[185389]: 2026-01-26 17:23:38.538 185393 DEBUG oslo_concurrency.lockutils [req-bb01be38-9485-4bed-846d-c286a283a000 req-4fa87ba0-0eb0-4ebd-83d2-ddfd935c5624 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquiring lock "refresh_cache-cf6218c0-bc2c-4097-91df-f60657ef7ab1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 17:23:38 compute-0 nova_compute[185389]: 2026-01-26 17:23:38.541 185393 DEBUG oslo_concurrency.lockutils [req-bb01be38-9485-4bed-846d-c286a283a000 req-4fa87ba0-0eb0-4ebd-83d2-ddfd935c5624 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquired lock "refresh_cache-cf6218c0-bc2c-4097-91df-f60657ef7ab1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 17:23:38 compute-0 nova_compute[185389]: 2026-01-26 17:23:38.545 185393 DEBUG nova.network.neutron [req-bb01be38-9485-4bed-846d-c286a283a000 req-4fa87ba0-0eb0-4ebd-83d2-ddfd935c5624 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] Refreshing network info cache for port 994f4b51-014f-469e-9096-4ffe2dafa019 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 17:23:39 compute-0 nova_compute[185389]: 2026-01-26 17:23:39.020 185393 INFO nova.virt.libvirt.driver [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] [instance: 69a46725-8a69-43b6-a3bc-615971d6f0df] Creating config drive at /var/lib/nova/instances/69a46725-8a69-43b6-a3bc-615971d6f0df/disk.config
Jan 26 17:23:39 compute-0 nova_compute[185389]: 2026-01-26 17:23:39.032 185393 DEBUG oslo_concurrency.processutils [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/69a46725-8a69-43b6-a3bc-615971d6f0df/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppfim427c execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:23:39 compute-0 nova_compute[185389]: 2026-01-26 17:23:39.189 185393 DEBUG oslo_concurrency.processutils [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/69a46725-8a69-43b6-a3bc-615971d6f0df/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppfim427c" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:23:39 compute-0 NetworkManager[56253]: <info>  [1769448219.3126] manager: (tap867ab8e9-18): new Tun device (/org/freedesktop/NetworkManager/Devices/60)
Jan 26 17:23:39 compute-0 kernel: tap867ab8e9-18: entered promiscuous mode
Jan 26 17:23:39 compute-0 ovn_controller[97699]: 2026-01-26T17:23:39Z|00119|binding|INFO|Claiming lport 867ab8e9-18b5-4260-b370-f39c517ff96b for this chassis.
Jan 26 17:23:39 compute-0 ovn_controller[97699]: 2026-01-26T17:23:39Z|00120|binding|INFO|867ab8e9-18b5-4260-b370-f39c517ff96b: Claiming fa:16:3e:52:a5:be 10.100.0.3
Jan 26 17:23:39 compute-0 nova_compute[185389]: 2026-01-26 17:23:39.324 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:39 compute-0 ovn_controller[97699]: 2026-01-26T17:23:39Z|00121|binding|INFO|Setting lport 867ab8e9-18b5-4260-b370-f39c517ff96b ovn-installed in OVS
Jan 26 17:23:39 compute-0 nova_compute[185389]: 2026-01-26 17:23:39.349 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:39 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:39.347 106955 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:52:a5:be 10.100.0.3'], port_security=['fa:16:3e:52:a5:be 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '69a46725-8a69-43b6-a3bc-615971d6f0df', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-87c81880-3494-4deb-b3df-3b6a60ff84ca', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ff6e46591ae14b9183698121bab3d554', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cf31bed3-811e-434b-9167-852691f7b3ed', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a769a86d-cd60-4fd6-82fe-fe13dcd97313, chassis=[<ovs.db.idl.Row object at 0x7faee18cfdf0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7faee18cfdf0>], logical_port=867ab8e9-18b5-4260-b370-f39c517ff96b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 17:23:39 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:39.349 106955 INFO neutron.agent.ovn.metadata.agent [-] Port 867ab8e9-18b5-4260-b370-f39c517ff96b in datapath 87c81880-3494-4deb-b3df-3b6a60ff84ca bound to our chassis
Jan 26 17:23:39 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:39.352 106955 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 87c81880-3494-4deb-b3df-3b6a60ff84ca
Jan 26 17:23:39 compute-0 ovn_controller[97699]: 2026-01-26T17:23:39Z|00122|binding|INFO|Setting lport 867ab8e9-18b5-4260-b370-f39c517ff96b up in Southbound
Jan 26 17:23:39 compute-0 systemd-udevd[257867]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 17:23:39 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:39.371 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[7d9856fc-dc66-46e8-b5ea-0b793e1c8270]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:23:39 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:39.372 106955 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap87c81880-31 in ovnmeta-87c81880-3494-4deb-b3df-3b6a60ff84ca namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 17:23:39 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:39.375 238734 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap87c81880-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 17:23:39 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:39.375 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[3e8902a7-768c-4047-afb4-1a241ae619ff]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:23:39 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:39.378 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[2ddbeff4-1259-4869-8e59-1d33c9160d61]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:23:39 compute-0 systemd-machined[156679]: New machine qemu-13-instance-0000000c.
Jan 26 17:23:39 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:39.391 107449 DEBUG oslo.privsep.daemon [-] privsep: reply[4d11b4a7-3525-4dd0-ba00-d105f6dcb001]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:23:39 compute-0 NetworkManager[56253]: <info>  [1769448219.3957] device (tap867ab8e9-18): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 17:23:39 compute-0 NetworkManager[56253]: <info>  [1769448219.3964] device (tap867ab8e9-18): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 17:23:39 compute-0 systemd[1]: Started Virtual Machine qemu-13-instance-0000000c.
Jan 26 17:23:39 compute-0 podman[257827]: 2026-01-26 17:23:39.420311229 +0000 UTC m=+0.145651215 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 26 17:23:39 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:39.422 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[30d735d4-2798-43de-a334-1fff65158a0e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:23:39 compute-0 podman[257825]: 2026-01-26 17:23:39.438660597 +0000 UTC m=+0.172495344 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, distribution-scope=public, vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', 
'/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, build-date=2025-08-20T13:12:41)
Jan 26 17:23:39 compute-0 podman[257826]: 2026-01-26 17:23:39.491191124 +0000 UTC m=+0.220165049 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20260120, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 17:23:39 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:39.500 238787 DEBUG oslo.privsep.daemon [-] privsep: reply[b6b149ce-8ff4-4fd1-a318-85f57977604a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:23:39 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:39.529 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[30361673-fd36-49b6-94a5-7623971b2674]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:23:39 compute-0 NetworkManager[56253]: <info>  [1769448219.5318] manager: (tap87c81880-30): new Veth device (/org/freedesktop/NetworkManager/Devices/61)
Jan 26 17:23:39 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:39.582 238787 DEBUG oslo.privsep.daemon [-] privsep: reply[c5e7cbb6-3c1b-4d7a-a324-7ef396dd49f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:23:39 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:39.602 238787 DEBUG oslo.privsep.daemon [-] privsep: reply[ca6c4813-cb11-4406-923b-969887d618ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:23:39 compute-0 NetworkManager[56253]: <info>  [1769448219.6271] device (tap87c81880-30): carrier: link connected
Jan 26 17:23:39 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:39.632 238787 DEBUG oslo.privsep.daemon [-] privsep: reply[81dbffba-cc6c-4690-b3c9-6d8dd1ed3894]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:23:39 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:39.652 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[12fc5273-5e36-4411-ae35-a3fa62692c6e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap87c81880-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:82:b8:01'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 39], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 686812, 'reachable_time': 28646, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 257925, 'error': None, 'target': 'ovnmeta-87c81880-3494-4deb-b3df-3b6a60ff84ca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:23:39 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:39.668 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[9b62bf32-8576-4aba-93dd-cc98cf63aaf2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe82:b801'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 686812, 'tstamp': 686812}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 257926, 'error': None, 'target': 'ovnmeta-87c81880-3494-4deb-b3df-3b6a60ff84ca', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:23:39 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:39.684 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[f78bdfc3-d4d7-4068-ab98-6d6e44b76c77]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap87c81880-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:82:b8:01'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 39], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 686812, 'reachable_time': 28646, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 257927, 'error': None, 'target': 'ovnmeta-87c81880-3494-4deb-b3df-3b6a60ff84ca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:23:39 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:39.716 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[0befc5eb-8567-40f0-bcdf-fc7af0abfcf4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:23:39 compute-0 nova_compute[185389]: 2026-01-26 17:23:39.774 185393 DEBUG nova.compute.manager [req-24699a19-0b3c-46b1-aaf2-63426b5d1bce req-496a4f61-c3c2-4175-bf67-d9864ac48ed0 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 69a46725-8a69-43b6-a3bc-615971d6f0df] Received event network-vif-plugged-867ab8e9-18b5-4260-b370-f39c517ff96b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 17:23:39 compute-0 nova_compute[185389]: 2026-01-26 17:23:39.776 185393 DEBUG oslo_concurrency.lockutils [req-24699a19-0b3c-46b1-aaf2-63426b5d1bce req-496a4f61-c3c2-4175-bf67-d9864ac48ed0 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquiring lock "69a46725-8a69-43b6-a3bc-615971d6f0df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:23:39 compute-0 nova_compute[185389]: 2026-01-26 17:23:39.776 185393 DEBUG oslo_concurrency.lockutils [req-24699a19-0b3c-46b1-aaf2-63426b5d1bce req-496a4f61-c3c2-4175-bf67-d9864ac48ed0 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "69a46725-8a69-43b6-a3bc-615971d6f0df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:23:39 compute-0 nova_compute[185389]: 2026-01-26 17:23:39.777 185393 DEBUG oslo_concurrency.lockutils [req-24699a19-0b3c-46b1-aaf2-63426b5d1bce req-496a4f61-c3c2-4175-bf67-d9864ac48ed0 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "69a46725-8a69-43b6-a3bc-615971d6f0df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:23:39 compute-0 nova_compute[185389]: 2026-01-26 17:23:39.777 185393 DEBUG nova.compute.manager [req-24699a19-0b3c-46b1-aaf2-63426b5d1bce req-496a4f61-c3c2-4175-bf67-d9864ac48ed0 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 69a46725-8a69-43b6-a3bc-615971d6f0df] Processing event network-vif-plugged-867ab8e9-18b5-4260-b370-f39c517ff96b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 17:23:39 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:39.778 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[de416d78-cf3a-4b2a-83b2-e5cfce788b70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:23:39 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:39.781 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap87c81880-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:23:39 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:39.781 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 17:23:39 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:39.782 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap87c81880-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:23:39 compute-0 nova_compute[185389]: 2026-01-26 17:23:39.786 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:39 compute-0 kernel: tap87c81880-30: entered promiscuous mode
Jan 26 17:23:39 compute-0 NetworkManager[56253]: <info>  [1769448219.7871] manager: (tap87c81880-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/62)
Jan 26 17:23:39 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:39.791 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap87c81880-30, col_values=(('external_ids', {'iface-id': '5f133853-1e22-4df1-be59-98e7246ddbc0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:23:39 compute-0 nova_compute[185389]: 2026-01-26 17:23:39.793 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:39 compute-0 ovn_controller[97699]: 2026-01-26T17:23:39Z|00123|binding|INFO|Releasing lport 5f133853-1e22-4df1-be59-98e7246ddbc0 from this chassis (sb_readonly=0)
Jan 26 17:23:39 compute-0 nova_compute[185389]: 2026-01-26 17:23:39.805 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:39 compute-0 nova_compute[185389]: 2026-01-26 17:23:39.807 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:39 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:39.808 106955 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/87c81880-3494-4deb-b3df-3b6a60ff84ca.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/87c81880-3494-4deb-b3df-3b6a60ff84ca.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 17:23:39 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:39.809 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[75bd18ae-5b98-4d5b-9669-cfe2c03e832f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:23:39 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:39.810 106955 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 17:23:39 compute-0 ovn_metadata_agent[106950]: global
Jan 26 17:23:39 compute-0 ovn_metadata_agent[106950]:     log         /dev/log local0 debug
Jan 26 17:23:39 compute-0 ovn_metadata_agent[106950]:     log-tag     haproxy-metadata-proxy-87c81880-3494-4deb-b3df-3b6a60ff84ca
Jan 26 17:23:39 compute-0 ovn_metadata_agent[106950]:     user        root
Jan 26 17:23:39 compute-0 ovn_metadata_agent[106950]:     group       root
Jan 26 17:23:39 compute-0 ovn_metadata_agent[106950]:     maxconn     1024
Jan 26 17:23:39 compute-0 ovn_metadata_agent[106950]:     pidfile     /var/lib/neutron/external/pids/87c81880-3494-4deb-b3df-3b6a60ff84ca.pid.haproxy
Jan 26 17:23:39 compute-0 ovn_metadata_agent[106950]:     daemon
Jan 26 17:23:39 compute-0 ovn_metadata_agent[106950]: 
Jan 26 17:23:39 compute-0 ovn_metadata_agent[106950]: defaults
Jan 26 17:23:39 compute-0 ovn_metadata_agent[106950]:     log global
Jan 26 17:23:39 compute-0 ovn_metadata_agent[106950]:     mode http
Jan 26 17:23:39 compute-0 ovn_metadata_agent[106950]:     option httplog
Jan 26 17:23:39 compute-0 ovn_metadata_agent[106950]:     option dontlognull
Jan 26 17:23:39 compute-0 ovn_metadata_agent[106950]:     option http-server-close
Jan 26 17:23:39 compute-0 ovn_metadata_agent[106950]:     option forwardfor
Jan 26 17:23:39 compute-0 ovn_metadata_agent[106950]:     retries                 3
Jan 26 17:23:39 compute-0 ovn_metadata_agent[106950]:     timeout http-request    30s
Jan 26 17:23:39 compute-0 ovn_metadata_agent[106950]:     timeout connect         30s
Jan 26 17:23:39 compute-0 ovn_metadata_agent[106950]:     timeout client          32s
Jan 26 17:23:39 compute-0 ovn_metadata_agent[106950]:     timeout server          32s
Jan 26 17:23:39 compute-0 ovn_metadata_agent[106950]:     timeout http-keep-alive 30s
Jan 26 17:23:39 compute-0 ovn_metadata_agent[106950]: 
Jan 26 17:23:39 compute-0 ovn_metadata_agent[106950]: 
Jan 26 17:23:39 compute-0 ovn_metadata_agent[106950]: listen listener
Jan 26 17:23:39 compute-0 ovn_metadata_agent[106950]:     bind 169.254.169.254:80
Jan 26 17:23:39 compute-0 ovn_metadata_agent[106950]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 17:23:39 compute-0 ovn_metadata_agent[106950]:     http-request add-header X-OVN-Network-ID 87c81880-3494-4deb-b3df-3b6a60ff84ca
Jan 26 17:23:39 compute-0 ovn_metadata_agent[106950]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 17:23:39 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:39.811 106955 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-87c81880-3494-4deb-b3df-3b6a60ff84ca', 'env', 'PROCESS_TAG=haproxy-87c81880-3494-4deb-b3df-3b6a60ff84ca', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/87c81880-3494-4deb-b3df-3b6a60ff84ca.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 17:23:40 compute-0 nova_compute[185389]: 2026-01-26 17:23:40.038 185393 DEBUG nova.compute.manager [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] [instance: 69a46725-8a69-43b6-a3bc-615971d6f0df] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 17:23:40 compute-0 nova_compute[185389]: 2026-01-26 17:23:40.048 185393 DEBUG nova.virt.driver [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Emitting event <LifecycleEvent: 1769448220.0394442, 69a46725-8a69-43b6-a3bc-615971d6f0df => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 17:23:40 compute-0 nova_compute[185389]: 2026-01-26 17:23:40.049 185393 INFO nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: 69a46725-8a69-43b6-a3bc-615971d6f0df] VM Started (Lifecycle Event)
Jan 26 17:23:40 compute-0 nova_compute[185389]: 2026-01-26 17:23:40.060 185393 DEBUG nova.virt.libvirt.driver [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] [instance: 69a46725-8a69-43b6-a3bc-615971d6f0df] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 17:23:40 compute-0 nova_compute[185389]: 2026-01-26 17:23:40.070 185393 INFO nova.virt.libvirt.driver [-] [instance: 69a46725-8a69-43b6-a3bc-615971d6f0df] Instance spawned successfully.
Jan 26 17:23:40 compute-0 nova_compute[185389]: 2026-01-26 17:23:40.070 185393 DEBUG nova.virt.libvirt.driver [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] [instance: 69a46725-8a69-43b6-a3bc-615971d6f0df] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 17:23:40 compute-0 nova_compute[185389]: 2026-01-26 17:23:40.094 185393 DEBUG nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: 69a46725-8a69-43b6-a3bc-615971d6f0df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 17:23:40 compute-0 nova_compute[185389]: 2026-01-26 17:23:40.101 185393 DEBUG nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: 69a46725-8a69-43b6-a3bc-615971d6f0df] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 17:23:40 compute-0 nova_compute[185389]: 2026-01-26 17:23:40.105 185393 DEBUG nova.virt.libvirt.driver [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] [instance: 69a46725-8a69-43b6-a3bc-615971d6f0df] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 17:23:40 compute-0 nova_compute[185389]: 2026-01-26 17:23:40.106 185393 DEBUG nova.virt.libvirt.driver [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] [instance: 69a46725-8a69-43b6-a3bc-615971d6f0df] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 17:23:40 compute-0 nova_compute[185389]: 2026-01-26 17:23:40.107 185393 DEBUG nova.virt.libvirt.driver [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] [instance: 69a46725-8a69-43b6-a3bc-615971d6f0df] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 17:23:40 compute-0 nova_compute[185389]: 2026-01-26 17:23:40.107 185393 DEBUG nova.virt.libvirt.driver [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] [instance: 69a46725-8a69-43b6-a3bc-615971d6f0df] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 17:23:40 compute-0 nova_compute[185389]: 2026-01-26 17:23:40.108 185393 DEBUG nova.virt.libvirt.driver [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] [instance: 69a46725-8a69-43b6-a3bc-615971d6f0df] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 17:23:40 compute-0 nova_compute[185389]: 2026-01-26 17:23:40.108 185393 DEBUG nova.virt.libvirt.driver [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] [instance: 69a46725-8a69-43b6-a3bc-615971d6f0df] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 17:23:40 compute-0 nova_compute[185389]: 2026-01-26 17:23:40.152 185393 INFO nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: 69a46725-8a69-43b6-a3bc-615971d6f0df] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 17:23:40 compute-0 nova_compute[185389]: 2026-01-26 17:23:40.153 185393 DEBUG nova.virt.driver [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Emitting event <LifecycleEvent: 1769448220.0395749, 69a46725-8a69-43b6-a3bc-615971d6f0df => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 17:23:40 compute-0 nova_compute[185389]: 2026-01-26 17:23:40.154 185393 INFO nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: 69a46725-8a69-43b6-a3bc-615971d6f0df] VM Paused (Lifecycle Event)
Jan 26 17:23:40 compute-0 nova_compute[185389]: 2026-01-26 17:23:40.186 185393 DEBUG nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: 69a46725-8a69-43b6-a3bc-615971d6f0df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 17:23:40 compute-0 nova_compute[185389]: 2026-01-26 17:23:40.196 185393 DEBUG nova.virt.driver [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Emitting event <LifecycleEvent: 1769448220.05419, 69a46725-8a69-43b6-a3bc-615971d6f0df => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 17:23:40 compute-0 nova_compute[185389]: 2026-01-26 17:23:40.197 185393 INFO nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: 69a46725-8a69-43b6-a3bc-615971d6f0df] VM Resumed (Lifecycle Event)
Jan 26 17:23:40 compute-0 nova_compute[185389]: 2026-01-26 17:23:40.205 185393 INFO nova.compute.manager [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] [instance: 69a46725-8a69-43b6-a3bc-615971d6f0df] Took 13.19 seconds to spawn the instance on the hypervisor.
Jan 26 17:23:40 compute-0 nova_compute[185389]: 2026-01-26 17:23:40.206 185393 DEBUG nova.compute.manager [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] [instance: 69a46725-8a69-43b6-a3bc-615971d6f0df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 17:23:40 compute-0 nova_compute[185389]: 2026-01-26 17:23:40.218 185393 DEBUG nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: 69a46725-8a69-43b6-a3bc-615971d6f0df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 17:23:40 compute-0 nova_compute[185389]: 2026-01-26 17:23:40.222 185393 DEBUG nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: 69a46725-8a69-43b6-a3bc-615971d6f0df] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 17:23:40 compute-0 nova_compute[185389]: 2026-01-26 17:23:40.251 185393 INFO nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: 69a46725-8a69-43b6-a3bc-615971d6f0df] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 17:23:40 compute-0 nova_compute[185389]: 2026-01-26 17:23:40.298 185393 INFO nova.compute.manager [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] [instance: 69a46725-8a69-43b6-a3bc-615971d6f0df] Took 13.86 seconds to build instance.
Jan 26 17:23:40 compute-0 nova_compute[185389]: 2026-01-26 17:23:40.342 185393 DEBUG oslo_concurrency.lockutils [None req-aad7b5b5-7b87-45c7-9177-f331fcc90a6e 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] Lock "69a46725-8a69-43b6-a3bc-615971d6f0df" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.073s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:23:40 compute-0 podman[257965]: 2026-01-26 17:23:40.29486049 +0000 UTC m=+0.036668777 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 17:23:40 compute-0 nova_compute[185389]: 2026-01-26 17:23:40.904 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:41 compute-0 podman[257965]: 2026-01-26 17:23:41.23941525 +0000 UTC m=+0.981223537 container create efc33cb7ee605810b537271925b6fd37a4d4f52226a858acb0de2734e1d9c019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-87c81880-3494-4deb-b3df-3b6a60ff84ca, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3)
Jan 26 17:23:41 compute-0 systemd[1]: Started libpod-conmon-efc33cb7ee605810b537271925b6fd37a4d4f52226a858acb0de2734e1d9c019.scope.
Jan 26 17:23:41 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:23:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52ba4ebec28a767e51f481b7f23662f171d534bc5b9974f5bad369d1ed0ec81e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 17:23:41 compute-0 podman[257975]: 2026-01-26 17:23:41.831228885 +0000 UTC m=+0.550293428 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 26 17:23:41 compute-0 nova_compute[185389]: 2026-01-26 17:23:41.930 185393 DEBUG nova.compute.manager [req-55138b41-3469-4268-825f-3108e0b4f676 req-596b3680-e6f4-44d0-8db1-a3b9261ad906 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 69a46725-8a69-43b6-a3bc-615971d6f0df] Received event network-vif-plugged-867ab8e9-18b5-4260-b370-f39c517ff96b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 17:23:41 compute-0 nova_compute[185389]: 2026-01-26 17:23:41.931 185393 DEBUG oslo_concurrency.lockutils [req-55138b41-3469-4268-825f-3108e0b4f676 req-596b3680-e6f4-44d0-8db1-a3b9261ad906 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquiring lock "69a46725-8a69-43b6-a3bc-615971d6f0df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:23:41 compute-0 nova_compute[185389]: 2026-01-26 17:23:41.932 185393 DEBUG oslo_concurrency.lockutils [req-55138b41-3469-4268-825f-3108e0b4f676 req-596b3680-e6f4-44d0-8db1-a3b9261ad906 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "69a46725-8a69-43b6-a3bc-615971d6f0df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:23:41 compute-0 nova_compute[185389]: 2026-01-26 17:23:41.933 185393 DEBUG oslo_concurrency.lockutils [req-55138b41-3469-4268-825f-3108e0b4f676 req-596b3680-e6f4-44d0-8db1-a3b9261ad906 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "69a46725-8a69-43b6-a3bc-615971d6f0df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:23:41 compute-0 nova_compute[185389]: 2026-01-26 17:23:41.933 185393 DEBUG nova.compute.manager [req-55138b41-3469-4268-825f-3108e0b4f676 req-596b3680-e6f4-44d0-8db1-a3b9261ad906 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 69a46725-8a69-43b6-a3bc-615971d6f0df] No waiting events found dispatching network-vif-plugged-867ab8e9-18b5-4260-b370-f39c517ff96b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 17:23:41 compute-0 nova_compute[185389]: 2026-01-26 17:23:41.934 185393 WARNING nova.compute.manager [req-55138b41-3469-4268-825f-3108e0b4f676 req-596b3680-e6f4-44d0-8db1-a3b9261ad906 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 69a46725-8a69-43b6-a3bc-615971d6f0df] Received unexpected event network-vif-plugged-867ab8e9-18b5-4260-b370-f39c517ff96b for instance with vm_state active and task_state None.
Jan 26 17:23:41 compute-0 podman[257965]: 2026-01-26 17:23:41.958697736 +0000 UTC m=+1.700506053 container init efc33cb7ee605810b537271925b6fd37a4d4f52226a858acb0de2734e1d9c019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-87c81880-3494-4deb-b3df-3b6a60ff84ca, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202)
Jan 26 17:23:41 compute-0 podman[257965]: 2026-01-26 17:23:41.969235121 +0000 UTC m=+1.711043408 container start efc33cb7ee605810b537271925b6fd37a4d4f52226a858acb0de2734e1d9c019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-87c81880-3494-4deb-b3df-3b6a60ff84ca, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 17:23:42 compute-0 neutron-haproxy-ovnmeta-87c81880-3494-4deb-b3df-3b6a60ff84ca[257992]: [NOTICE]   (258007) : New worker (258009) forked
Jan 26 17:23:42 compute-0 neutron-haproxy-ovnmeta-87c81880-3494-4deb-b3df-3b6a60ff84ca[257992]: [NOTICE]   (258007) : Loading success.
Jan 26 17:23:42 compute-0 nova_compute[185389]: 2026-01-26 17:23:42.322 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:42 compute-0 nova_compute[185389]: 2026-01-26 17:23:42.758 185393 DEBUG nova.network.neutron [req-55a8b1ee-4f6e-4859-9050-16df971f859a req-82615e0a-4035-4ff5-8d4c-a4eec42d8117 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 69a46725-8a69-43b6-a3bc-615971d6f0df] Updated VIF entry in instance network info cache for port 867ab8e9-18b5-4260-b370-f39c517ff96b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 17:23:42 compute-0 nova_compute[185389]: 2026-01-26 17:23:42.759 185393 DEBUG nova.network.neutron [req-55a8b1ee-4f6e-4859-9050-16df971f859a req-82615e0a-4035-4ff5-8d4c-a4eec42d8117 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 69a46725-8a69-43b6-a3bc-615971d6f0df] Updating instance_info_cache with network_info: [{"id": "867ab8e9-18b5-4260-b370-f39c517ff96b", "address": "fa:16:3e:52:a5:be", "network": {"id": "87c81880-3494-4deb-b3df-3b6a60ff84ca", "bridge": "br-int", "label": "tempest-ServersTestJSON-1079224151-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff6e46591ae14b9183698121bab3d554", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap867ab8e9-18", "ovs_interfaceid": "867ab8e9-18b5-4260-b370-f39c517ff96b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 17:23:42 compute-0 nova_compute[185389]: 2026-01-26 17:23:42.789 185393 DEBUG oslo_concurrency.lockutils [req-55a8b1ee-4f6e-4859-9050-16df971f859a req-82615e0a-4035-4ff5-8d4c-a4eec42d8117 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Releasing lock "refresh_cache-69a46725-8a69-43b6-a3bc-615971d6f0df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 17:23:43 compute-0 podman[258018]: 2026-01-26 17:23:43.213765106 +0000 UTC m=+0.109587577 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 26 17:23:43 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:43.386 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1c72c11d-5050-47c3-89e8-912766588fb3, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:23:44 compute-0 nova_compute[185389]: 2026-01-26 17:23:44.295 185393 DEBUG nova.network.neutron [req-bb01be38-9485-4bed-846d-c286a283a000 req-4fa87ba0-0eb0-4ebd-83d2-ddfd935c5624 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] Updated VIF entry in instance network info cache for port 994f4b51-014f-469e-9096-4ffe2dafa019. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 17:23:44 compute-0 nova_compute[185389]: 2026-01-26 17:23:44.297 185393 DEBUG nova.network.neutron [req-bb01be38-9485-4bed-846d-c286a283a000 req-4fa87ba0-0eb0-4ebd-83d2-ddfd935c5624 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] Updating instance_info_cache with network_info: [{"id": "994f4b51-014f-469e-9096-4ffe2dafa019", "address": "fa:16:3e:d9:71:2d", "network": {"id": "181e9ee7-4b3f-4c71-9f87-ee525fae0a23", "bridge": "br-int", "label": "tempest-network-smoke--2054182957", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "72e07b00ccf54deaa85258e2c3332b45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap994f4b51-01", "ovs_interfaceid": "994f4b51-014f-469e-9096-4ffe2dafa019", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 17:23:44 compute-0 nova_compute[185389]: 2026-01-26 17:23:44.317 185393 DEBUG oslo_concurrency.lockutils [req-bb01be38-9485-4bed-846d-c286a283a000 req-4fa87ba0-0eb0-4ebd-83d2-ddfd935c5624 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Releasing lock "refresh_cache-cf6218c0-bc2c-4097-91df-f60657ef7ab1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 17:23:45 compute-0 nova_compute[185389]: 2026-01-26 17:23:45.332 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:45 compute-0 nova_compute[185389]: 2026-01-26 17:23:45.907 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:46 compute-0 nova_compute[185389]: 2026-01-26 17:23:46.907 185393 DEBUG nova.compute.manager [req-6c7083de-b1db-43d7-a6e6-8a75afb2d119 req-ebe5c036-c8ba-4220-847f-f5b2c80a7ee1 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 69a46725-8a69-43b6-a3bc-615971d6f0df] Received event network-changed-867ab8e9-18b5-4260-b370-f39c517ff96b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 17:23:46 compute-0 nova_compute[185389]: 2026-01-26 17:23:46.909 185393 DEBUG nova.compute.manager [req-6c7083de-b1db-43d7-a6e6-8a75afb2d119 req-ebe5c036-c8ba-4220-847f-f5b2c80a7ee1 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 69a46725-8a69-43b6-a3bc-615971d6f0df] Refreshing instance network info cache due to event network-changed-867ab8e9-18b5-4260-b370-f39c517ff96b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 17:23:46 compute-0 nova_compute[185389]: 2026-01-26 17:23:46.909 185393 DEBUG oslo_concurrency.lockutils [req-6c7083de-b1db-43d7-a6e6-8a75afb2d119 req-ebe5c036-c8ba-4220-847f-f5b2c80a7ee1 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquiring lock "refresh_cache-69a46725-8a69-43b6-a3bc-615971d6f0df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 17:23:46 compute-0 nova_compute[185389]: 2026-01-26 17:23:46.910 185393 DEBUG oslo_concurrency.lockutils [req-6c7083de-b1db-43d7-a6e6-8a75afb2d119 req-ebe5c036-c8ba-4220-847f-f5b2c80a7ee1 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquired lock "refresh_cache-69a46725-8a69-43b6-a3bc-615971d6f0df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 17:23:46 compute-0 nova_compute[185389]: 2026-01-26 17:23:46.910 185393 DEBUG nova.network.neutron [req-6c7083de-b1db-43d7-a6e6-8a75afb2d119 req-ebe5c036-c8ba-4220-847f-f5b2c80a7ee1 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 69a46725-8a69-43b6-a3bc-615971d6f0df] Refreshing network info cache for port 867ab8e9-18b5-4260-b370-f39c517ff96b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 17:23:47 compute-0 podman[258038]: 2026-01-26 17:23:47.197394824 +0000 UTC m=+0.076870808 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, architecture=x86_64, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., version=9.4, managed_by=edpm_ansible, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, io.buildah.version=1.29.0, name=ubi9, release=1214.1726694543, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, config_id=kepler, 
vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Jan 26 17:23:47 compute-0 podman[258037]: 2026-01-26 17:23:47.208157625 +0000 UTC m=+0.094575837 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:23:47 compute-0 podman[258036]: 2026-01-26 17:23:47.250025662 +0000 UTC m=+0.136670560 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, 
io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 26 17:23:47 compute-0 nova_compute[185389]: 2026-01-26 17:23:47.325 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:48 compute-0 nova_compute[185389]: 2026-01-26 17:23:48.620 185393 DEBUG oslo_concurrency.lockutils [None req-f7ad33b2-2c56-4572-856c-99f0cb8ba0fc 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] Acquiring lock "69a46725-8a69-43b6-a3bc-615971d6f0df" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:23:48 compute-0 nova_compute[185389]: 2026-01-26 17:23:48.621 185393 DEBUG oslo_concurrency.lockutils [None req-f7ad33b2-2c56-4572-856c-99f0cb8ba0fc 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] Lock "69a46725-8a69-43b6-a3bc-615971d6f0df" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:23:48 compute-0 nova_compute[185389]: 2026-01-26 17:23:48.622 185393 DEBUG oslo_concurrency.lockutils [None req-f7ad33b2-2c56-4572-856c-99f0cb8ba0fc 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] Acquiring lock "69a46725-8a69-43b6-a3bc-615971d6f0df-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:23:48 compute-0 nova_compute[185389]: 2026-01-26 17:23:48.622 185393 DEBUG oslo_concurrency.lockutils [None req-f7ad33b2-2c56-4572-856c-99f0cb8ba0fc 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] Lock "69a46725-8a69-43b6-a3bc-615971d6f0df-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:23:48 compute-0 nova_compute[185389]: 2026-01-26 17:23:48.623 185393 DEBUG oslo_concurrency.lockutils [None req-f7ad33b2-2c56-4572-856c-99f0cb8ba0fc 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] Lock "69a46725-8a69-43b6-a3bc-615971d6f0df-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:23:48 compute-0 nova_compute[185389]: 2026-01-26 17:23:48.624 185393 INFO nova.compute.manager [None req-f7ad33b2-2c56-4572-856c-99f0cb8ba0fc 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] [instance: 69a46725-8a69-43b6-a3bc-615971d6f0df] Terminating instance
Jan 26 17:23:48 compute-0 nova_compute[185389]: 2026-01-26 17:23:48.625 185393 DEBUG nova.compute.manager [None req-f7ad33b2-2c56-4572-856c-99f0cb8ba0fc 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] [instance: 69a46725-8a69-43b6-a3bc-615971d6f0df] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 17:23:48 compute-0 kernel: tap867ab8e9-18 (unregistering): left promiscuous mode
Jan 26 17:23:48 compute-0 NetworkManager[56253]: <info>  [1769448228.6626] device (tap867ab8e9-18): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 17:23:48 compute-0 nova_compute[185389]: 2026-01-26 17:23:48.681 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:48 compute-0 ovn_controller[97699]: 2026-01-26T17:23:48Z|00124|binding|INFO|Releasing lport 867ab8e9-18b5-4260-b370-f39c517ff96b from this chassis (sb_readonly=0)
Jan 26 17:23:48 compute-0 ovn_controller[97699]: 2026-01-26T17:23:48Z|00125|binding|INFO|Setting lport 867ab8e9-18b5-4260-b370-f39c517ff96b down in Southbound
Jan 26 17:23:48 compute-0 ovn_controller[97699]: 2026-01-26T17:23:48Z|00126|binding|INFO|Removing iface tap867ab8e9-18 ovn-installed in OVS
Jan 26 17:23:48 compute-0 nova_compute[185389]: 2026-01-26 17:23:48.686 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:48 compute-0 nova_compute[185389]: 2026-01-26 17:23:48.702 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:48 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Jan 26 17:23:48 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000c.scope: Consumed 9.280s CPU time.
Jan 26 17:23:48 compute-0 systemd-machined[156679]: Machine qemu-13-instance-0000000c terminated.
Jan 26 17:23:48 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:48.775 106955 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:52:a5:be 10.100.0.3'], port_security=['fa:16:3e:52:a5:be 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '69a46725-8a69-43b6-a3bc-615971d6f0df', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-87c81880-3494-4deb-b3df-3b6a60ff84ca', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ff6e46591ae14b9183698121bab3d554', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cf31bed3-811e-434b-9167-852691f7b3ed', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.216'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a769a86d-cd60-4fd6-82fe-fe13dcd97313, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7faee18cfdf0>], logical_port=867ab8e9-18b5-4260-b370-f39c517ff96b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7faee18cfdf0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 17:23:48 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:48.777 106955 INFO neutron.agent.ovn.metadata.agent [-] Port 867ab8e9-18b5-4260-b370-f39c517ff96b in datapath 87c81880-3494-4deb-b3df-3b6a60ff84ca unbound from our chassis
Jan 26 17:23:48 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:48.779 106955 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 87c81880-3494-4deb-b3df-3b6a60ff84ca, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 17:23:48 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:48.780 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[931b2a0e-bdc6-4978-be60-6e3dd2fb8dae]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:23:48 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:48.781 106955 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-87c81880-3494-4deb-b3df-3b6a60ff84ca namespace which is not needed anymore
Jan 26 17:23:48 compute-0 nova_compute[185389]: 2026-01-26 17:23:48.854 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:48 compute-0 nova_compute[185389]: 2026-01-26 17:23:48.860 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:48 compute-0 nova_compute[185389]: 2026-01-26 17:23:48.903 185393 INFO nova.virt.libvirt.driver [-] [instance: 69a46725-8a69-43b6-a3bc-615971d6f0df] Instance destroyed successfully.
Jan 26 17:23:48 compute-0 nova_compute[185389]: 2026-01-26 17:23:48.903 185393 DEBUG nova.objects.instance [None req-f7ad33b2-2c56-4572-856c-99f0cb8ba0fc 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] Lazy-loading 'resources' on Instance uuid 69a46725-8a69-43b6-a3bc-615971d6f0df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 17:23:48 compute-0 nova_compute[185389]: 2026-01-26 17:23:48.926 185393 DEBUG nova.virt.libvirt.vif [None req-f7ad33b2-2c56-4572-856c-99f0cb8ba0fc 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T17:23:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1917976453',display_name='tempest-ServersTestJSON-server-1917976453',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1917976453',id=12,image_ref='90acf026-cf3a-409a-999e-35d89bb9a6bf',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDIIXWOAUU+wzgbCxTuZ1CgRTJmh6zVCSpX/ed86eRXUM4OBvjRpAPW6jrLl1JW/p7jyneusxoBg4x7+CE629CpwNm9y1Ynw3oRQYINFOZSrNmIngBP3qnxLcn75wwg+AA==',key_name='tempest-keypair-934613233',keypairs=<?>,launch_index=0,launched_at=2026-01-26T17:23:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ff6e46591ae14b9183698121bab3d554',ramdisk_id='',reservation_id='r-ro8oi3lj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='90acf026-cf3a-409a-999e-35d89bb9a6bf',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1750390716',owner_user_name='tempest-ServersTestJSON-1750390716-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T17:23:40Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1ba2aac01dc64b1f9c69a2a78d95c6d5',uuid=69a46725-8a69-43b6-a3bc-615971d6f0df,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "867ab8e9-18b5-4260-b370-f39c517ff96b", "address": "fa:16:3e:52:a5:be", "network": {"id": "87c81880-3494-4deb-b3df-3b6a60ff84ca", "bridge": "br-int", "label": "tempest-ServersTestJSON-1079224151-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff6e46591ae14b9183698121bab3d554", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap867ab8e9-18", "ovs_interfaceid": "867ab8e9-18b5-4260-b370-f39c517ff96b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 17:23:48 compute-0 nova_compute[185389]: 2026-01-26 17:23:48.927 185393 DEBUG nova.network.os_vif_util [None req-f7ad33b2-2c56-4572-856c-99f0cb8ba0fc 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] Converting VIF {"id": "867ab8e9-18b5-4260-b370-f39c517ff96b", "address": "fa:16:3e:52:a5:be", "network": {"id": "87c81880-3494-4deb-b3df-3b6a60ff84ca", "bridge": "br-int", "label": "tempest-ServersTestJSON-1079224151-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff6e46591ae14b9183698121bab3d554", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap867ab8e9-18", "ovs_interfaceid": "867ab8e9-18b5-4260-b370-f39c517ff96b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 17:23:48 compute-0 nova_compute[185389]: 2026-01-26 17:23:48.929 185393 DEBUG nova.network.os_vif_util [None req-f7ad33b2-2c56-4572-856c-99f0cb8ba0fc 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:52:a5:be,bridge_name='br-int',has_traffic_filtering=True,id=867ab8e9-18b5-4260-b370-f39c517ff96b,network=Network(87c81880-3494-4deb-b3df-3b6a60ff84ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap867ab8e9-18') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 17:23:48 compute-0 nova_compute[185389]: 2026-01-26 17:23:48.930 185393 DEBUG os_vif [None req-f7ad33b2-2c56-4572-856c-99f0cb8ba0fc 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:52:a5:be,bridge_name='br-int',has_traffic_filtering=True,id=867ab8e9-18b5-4260-b370-f39c517ff96b,network=Network(87c81880-3494-4deb-b3df-3b6a60ff84ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap867ab8e9-18') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 17:23:48 compute-0 nova_compute[185389]: 2026-01-26 17:23:48.932 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:48 compute-0 nova_compute[185389]: 2026-01-26 17:23:48.933 185393 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap867ab8e9-18, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:23:48 compute-0 nova_compute[185389]: 2026-01-26 17:23:48.935 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:48 compute-0 nova_compute[185389]: 2026-01-26 17:23:48.938 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 17:23:48 compute-0 nova_compute[185389]: 2026-01-26 17:23:48.940 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:48 compute-0 nova_compute[185389]: 2026-01-26 17:23:48.944 185393 INFO os_vif [None req-f7ad33b2-2c56-4572-856c-99f0cb8ba0fc 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:52:a5:be,bridge_name='br-int',has_traffic_filtering=True,id=867ab8e9-18b5-4260-b370-f39c517ff96b,network=Network(87c81880-3494-4deb-b3df-3b6a60ff84ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap867ab8e9-18')
Jan 26 17:23:48 compute-0 nova_compute[185389]: 2026-01-26 17:23:48.946 185393 INFO nova.virt.libvirt.driver [None req-f7ad33b2-2c56-4572-856c-99f0cb8ba0fc 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] [instance: 69a46725-8a69-43b6-a3bc-615971d6f0df] Deleting instance files /var/lib/nova/instances/69a46725-8a69-43b6-a3bc-615971d6f0df_del
Jan 26 17:23:48 compute-0 nova_compute[185389]: 2026-01-26 17:23:48.947 185393 INFO nova.virt.libvirt.driver [None req-f7ad33b2-2c56-4572-856c-99f0cb8ba0fc 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] [instance: 69a46725-8a69-43b6-a3bc-615971d6f0df] Deletion of /var/lib/nova/instances/69a46725-8a69-43b6-a3bc-615971d6f0df_del complete
Jan 26 17:23:49 compute-0 neutron-haproxy-ovnmeta-87c81880-3494-4deb-b3df-3b6a60ff84ca[257992]: [NOTICE]   (258007) : haproxy version is 2.8.14-c23fe91
Jan 26 17:23:49 compute-0 neutron-haproxy-ovnmeta-87c81880-3494-4deb-b3df-3b6a60ff84ca[257992]: [NOTICE]   (258007) : path to executable is /usr/sbin/haproxy
Jan 26 17:23:49 compute-0 neutron-haproxy-ovnmeta-87c81880-3494-4deb-b3df-3b6a60ff84ca[257992]: [WARNING]  (258007) : Exiting Master process...
Jan 26 17:23:49 compute-0 neutron-haproxy-ovnmeta-87c81880-3494-4deb-b3df-3b6a60ff84ca[257992]: [ALERT]    (258007) : Current worker (258009) exited with code 143 (Terminated)
Jan 26 17:23:49 compute-0 neutron-haproxy-ovnmeta-87c81880-3494-4deb-b3df-3b6a60ff84ca[257992]: [WARNING]  (258007) : All workers exited. Exiting... (0)
Jan 26 17:23:49 compute-0 systemd[1]: libpod-efc33cb7ee605810b537271925b6fd37a4d4f52226a858acb0de2734e1d9c019.scope: Deactivated successfully.
Jan 26 17:23:49 compute-0 podman[258136]: 2026-01-26 17:23:49.015879618 +0000 UTC m=+0.071712848 container died efc33cb7ee605810b537271925b6fd37a4d4f52226a858acb0de2734e1d9c019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-87c81880-3494-4deb-b3df-3b6a60ff84ca, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 26 17:23:49 compute-0 nova_compute[185389]: 2026-01-26 17:23:49.027 185393 INFO nova.compute.manager [None req-f7ad33b2-2c56-4572-856c-99f0cb8ba0fc 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] [instance: 69a46725-8a69-43b6-a3bc-615971d6f0df] Took 0.40 seconds to destroy the instance on the hypervisor.
Jan 26 17:23:49 compute-0 nova_compute[185389]: 2026-01-26 17:23:49.028 185393 DEBUG oslo.service.loopingcall [None req-f7ad33b2-2c56-4572-856c-99f0cb8ba0fc 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 17:23:49 compute-0 nova_compute[185389]: 2026-01-26 17:23:49.029 185393 DEBUG nova.compute.manager [-] [instance: 69a46725-8a69-43b6-a3bc-615971d6f0df] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 17:23:49 compute-0 nova_compute[185389]: 2026-01-26 17:23:49.029 185393 DEBUG nova.network.neutron [-] [instance: 69a46725-8a69-43b6-a3bc-615971d6f0df] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 17:23:49 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-efc33cb7ee605810b537271925b6fd37a4d4f52226a858acb0de2734e1d9c019-userdata-shm.mount: Deactivated successfully.
Jan 26 17:23:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-52ba4ebec28a767e51f481b7f23662f171d534bc5b9974f5bad369d1ed0ec81e-merged.mount: Deactivated successfully.
Jan 26 17:23:49 compute-0 podman[258136]: 2026-01-26 17:23:49.072376462 +0000 UTC m=+0.128209602 container cleanup efc33cb7ee605810b537271925b6fd37a4d4f52226a858acb0de2734e1d9c019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-87c81880-3494-4deb-b3df-3b6a60ff84ca, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251202)
Jan 26 17:23:49 compute-0 systemd[1]: libpod-conmon-efc33cb7ee605810b537271925b6fd37a4d4f52226a858acb0de2734e1d9c019.scope: Deactivated successfully.
Jan 26 17:23:49 compute-0 podman[258161]: 2026-01-26 17:23:49.178919024 +0000 UTC m=+0.082181662 container remove efc33cb7ee605810b537271925b6fd37a4d4f52226a858acb0de2734e1d9c019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-87c81880-3494-4deb-b3df-3b6a60ff84ca, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 17:23:49 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:49.210 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[e10094db-2f93-4674-84b2-b766a3a09b25]: (4, ('Mon Jan 26 05:23:48 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-87c81880-3494-4deb-b3df-3b6a60ff84ca (efc33cb7ee605810b537271925b6fd37a4d4f52226a858acb0de2734e1d9c019)\nefc33cb7ee605810b537271925b6fd37a4d4f52226a858acb0de2734e1d9c019\nMon Jan 26 05:23:49 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-87c81880-3494-4deb-b3df-3b6a60ff84ca (efc33cb7ee605810b537271925b6fd37a4d4f52226a858acb0de2734e1d9c019)\nefc33cb7ee605810b537271925b6fd37a4d4f52226a858acb0de2734e1d9c019\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:23:49 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:49.213 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[14e62c5f-473f-4d4f-8d9a-877db92808c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:23:49 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:49.214 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap87c81880-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:23:49 compute-0 nova_compute[185389]: 2026-01-26 17:23:49.216 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:49 compute-0 kernel: tap87c81880-30: left promiscuous mode
Jan 26 17:23:49 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:49.228 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[cceeb2be-2555-4ce1-a1c6-dc86660cde2e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:23:49 compute-0 nova_compute[185389]: 2026-01-26 17:23:49.234 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:49 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:49.246 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[d5ae656a-a63c-4066-a1e0-0086c174da72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:23:49 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:49.250 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[3ba32a3d-068e-4641-b9ea-bfc3451fd7d8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:23:49 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:49.271 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[6391366a-c499-4869-8b5d-ef60c934fbb1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 686799, 'reachable_time': 23724, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258176, 'error': None, 'target': 'ovnmeta-87c81880-3494-4deb-b3df-3b6a60ff84ca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:23:49 compute-0 systemd[1]: run-netns-ovnmeta\x2d87c81880\x2d3494\x2d4deb\x2db3df\x2d3b6a60ff84ca.mount: Deactivated successfully.
Jan 26 17:23:49 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:49.275 107449 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-87c81880-3494-4deb-b3df-3b6a60ff84ca deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 17:23:49 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:23:49.275 107449 DEBUG oslo.privsep.daemon [-] privsep: reply[204e03f6-8f6f-4f49-a6a7-5e469b1f0b93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:23:49 compute-0 nova_compute[185389]: 2026-01-26 17:23:49.353 185393 DEBUG nova.compute.manager [req-24138ab7-aa42-4f45-8cbf-03a30cc30133 req-5dc5ad1d-5777-4ada-b263-c0559732e1f4 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 69a46725-8a69-43b6-a3bc-615971d6f0df] Received event network-vif-unplugged-867ab8e9-18b5-4260-b370-f39c517ff96b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 17:23:49 compute-0 nova_compute[185389]: 2026-01-26 17:23:49.354 185393 DEBUG oslo_concurrency.lockutils [req-24138ab7-aa42-4f45-8cbf-03a30cc30133 req-5dc5ad1d-5777-4ada-b263-c0559732e1f4 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquiring lock "69a46725-8a69-43b6-a3bc-615971d6f0df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:23:49 compute-0 nova_compute[185389]: 2026-01-26 17:23:49.355 185393 DEBUG oslo_concurrency.lockutils [req-24138ab7-aa42-4f45-8cbf-03a30cc30133 req-5dc5ad1d-5777-4ada-b263-c0559732e1f4 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "69a46725-8a69-43b6-a3bc-615971d6f0df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:23:49 compute-0 nova_compute[185389]: 2026-01-26 17:23:49.355 185393 DEBUG oslo_concurrency.lockutils [req-24138ab7-aa42-4f45-8cbf-03a30cc30133 req-5dc5ad1d-5777-4ada-b263-c0559732e1f4 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "69a46725-8a69-43b6-a3bc-615971d6f0df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:23:49 compute-0 nova_compute[185389]: 2026-01-26 17:23:49.355 185393 DEBUG nova.compute.manager [req-24138ab7-aa42-4f45-8cbf-03a30cc30133 req-5dc5ad1d-5777-4ada-b263-c0559732e1f4 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 69a46725-8a69-43b6-a3bc-615971d6f0df] No waiting events found dispatching network-vif-unplugged-867ab8e9-18b5-4260-b370-f39c517ff96b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 17:23:49 compute-0 nova_compute[185389]: 2026-01-26 17:23:49.356 185393 DEBUG nova.compute.manager [req-24138ab7-aa42-4f45-8cbf-03a30cc30133 req-5dc5ad1d-5777-4ada-b263-c0559732e1f4 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 69a46725-8a69-43b6-a3bc-615971d6f0df] Received event network-vif-unplugged-867ab8e9-18b5-4260-b370-f39c517ff96b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 17:23:50 compute-0 nova_compute[185389]: 2026-01-26 17:23:50.155 185393 DEBUG nova.network.neutron [req-6c7083de-b1db-43d7-a6e6-8a75afb2d119 req-ebe5c036-c8ba-4220-847f-f5b2c80a7ee1 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 69a46725-8a69-43b6-a3bc-615971d6f0df] Updated VIF entry in instance network info cache for port 867ab8e9-18b5-4260-b370-f39c517ff96b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 17:23:50 compute-0 nova_compute[185389]: 2026-01-26 17:23:50.155 185393 DEBUG nova.network.neutron [req-6c7083de-b1db-43d7-a6e6-8a75afb2d119 req-ebe5c036-c8ba-4220-847f-f5b2c80a7ee1 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 69a46725-8a69-43b6-a3bc-615971d6f0df] Updating instance_info_cache with network_info: [{"id": "867ab8e9-18b5-4260-b370-f39c517ff96b", "address": "fa:16:3e:52:a5:be", "network": {"id": "87c81880-3494-4deb-b3df-3b6a60ff84ca", "bridge": "br-int", "label": "tempest-ServersTestJSON-1079224151-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff6e46591ae14b9183698121bab3d554", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap867ab8e9-18", "ovs_interfaceid": "867ab8e9-18b5-4260-b370-f39c517ff96b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 17:23:50 compute-0 nova_compute[185389]: 2026-01-26 17:23:50.182 185393 DEBUG oslo_concurrency.lockutils [req-6c7083de-b1db-43d7-a6e6-8a75afb2d119 req-ebe5c036-c8ba-4220-847f-f5b2c80a7ee1 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Releasing lock "refresh_cache-69a46725-8a69-43b6-a3bc-615971d6f0df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 17:23:50 compute-0 ovn_controller[97699]: 2026-01-26T17:23:50Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b3:ea:64 10.100.0.5
Jan 26 17:23:50 compute-0 nova_compute[185389]: 2026-01-26 17:23:50.912 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:52 compute-0 nova_compute[185389]: 2026-01-26 17:23:52.135 185393 DEBUG nova.compute.manager [req-35dcda68-532f-43f8-9824-91db60ad99c4 req-4adceff2-d4f0-42f7-b801-a29d81e6de1e 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 69a46725-8a69-43b6-a3bc-615971d6f0df] Received event network-vif-plugged-867ab8e9-18b5-4260-b370-f39c517ff96b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 17:23:52 compute-0 nova_compute[185389]: 2026-01-26 17:23:52.136 185393 DEBUG oslo_concurrency.lockutils [req-35dcda68-532f-43f8-9824-91db60ad99c4 req-4adceff2-d4f0-42f7-b801-a29d81e6de1e 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquiring lock "69a46725-8a69-43b6-a3bc-615971d6f0df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:23:52 compute-0 nova_compute[185389]: 2026-01-26 17:23:52.136 185393 DEBUG oslo_concurrency.lockutils [req-35dcda68-532f-43f8-9824-91db60ad99c4 req-4adceff2-d4f0-42f7-b801-a29d81e6de1e 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "69a46725-8a69-43b6-a3bc-615971d6f0df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:23:52 compute-0 nova_compute[185389]: 2026-01-26 17:23:52.137 185393 DEBUG oslo_concurrency.lockutils [req-35dcda68-532f-43f8-9824-91db60ad99c4 req-4adceff2-d4f0-42f7-b801-a29d81e6de1e 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "69a46725-8a69-43b6-a3bc-615971d6f0df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:23:52 compute-0 nova_compute[185389]: 2026-01-26 17:23:52.137 185393 DEBUG nova.compute.manager [req-35dcda68-532f-43f8-9824-91db60ad99c4 req-4adceff2-d4f0-42f7-b801-a29d81e6de1e 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 69a46725-8a69-43b6-a3bc-615971d6f0df] No waiting events found dispatching network-vif-plugged-867ab8e9-18b5-4260-b370-f39c517ff96b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 17:23:52 compute-0 nova_compute[185389]: 2026-01-26 17:23:52.137 185393 WARNING nova.compute.manager [req-35dcda68-532f-43f8-9824-91db60ad99c4 req-4adceff2-d4f0-42f7-b801-a29d81e6de1e 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 69a46725-8a69-43b6-a3bc-615971d6f0df] Received unexpected event network-vif-plugged-867ab8e9-18b5-4260-b370-f39c517ff96b for instance with vm_state active and task_state deleting.
Jan 26 17:23:52 compute-0 nova_compute[185389]: 2026-01-26 17:23:52.364 185393 DEBUG nova.network.neutron [-] [instance: 69a46725-8a69-43b6-a3bc-615971d6f0df] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 17:23:52 compute-0 nova_compute[185389]: 2026-01-26 17:23:52.384 185393 INFO nova.compute.manager [-] [instance: 69a46725-8a69-43b6-a3bc-615971d6f0df] Took 3.36 seconds to deallocate network for instance.
Jan 26 17:23:52 compute-0 nova_compute[185389]: 2026-01-26 17:23:52.435 185393 DEBUG oslo_concurrency.lockutils [None req-f7ad33b2-2c56-4572-856c-99f0cb8ba0fc 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:23:52 compute-0 nova_compute[185389]: 2026-01-26 17:23:52.436 185393 DEBUG oslo_concurrency.lockutils [None req-f7ad33b2-2c56-4572-856c-99f0cb8ba0fc 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:23:52 compute-0 nova_compute[185389]: 2026-01-26 17:23:52.545 185393 DEBUG nova.compute.provider_tree [None req-f7ad33b2-2c56-4572-856c-99f0cb8ba0fc 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:23:52 compute-0 nova_compute[185389]: 2026-01-26 17:23:52.560 185393 DEBUG nova.scheduler.client.report [None req-f7ad33b2-2c56-4572-856c-99f0cb8ba0fc 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:23:52 compute-0 nova_compute[185389]: 2026-01-26 17:23:52.582 185393 DEBUG oslo_concurrency.lockutils [None req-f7ad33b2-2c56-4572-856c-99f0cb8ba0fc 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.147s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:23:52 compute-0 nova_compute[185389]: 2026-01-26 17:23:52.636 185393 INFO nova.scheduler.client.report [None req-f7ad33b2-2c56-4572-856c-99f0cb8ba0fc 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] Deleted allocations for instance 69a46725-8a69-43b6-a3bc-615971d6f0df
Jan 26 17:23:52 compute-0 nova_compute[185389]: 2026-01-26 17:23:52.718 185393 DEBUG oslo_concurrency.lockutils [None req-f7ad33b2-2c56-4572-856c-99f0cb8ba0fc 1ba2aac01dc64b1f9c69a2a78d95c6d5 ff6e46591ae14b9183698121bab3d554 - - default default] Lock "69a46725-8a69-43b6-a3bc-615971d6f0df" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.097s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:23:52 compute-0 nova_compute[185389]: 2026-01-26 17:23:52.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:23:53 compute-0 nova_compute[185389]: 2026-01-26 17:23:53.579 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:53 compute-0 nova_compute[185389]: 2026-01-26 17:23:53.936 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:54 compute-0 nova_compute[185389]: 2026-01-26 17:23:54.255 185393 DEBUG nova.compute.manager [req-9d2723f2-5a03-4fba-9f9d-30786fa8afb7 req-f18e4ea7-9a07-4be2-805d-070e66ffe30a 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 69a46725-8a69-43b6-a3bc-615971d6f0df] Received event network-vif-deleted-867ab8e9-18b5-4260-b370-f39c517ff96b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 17:23:55 compute-0 nova_compute[185389]: 2026-01-26 17:23:55.919 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:58 compute-0 ovn_controller[97699]: 2026-01-26T17:23:58Z|00127|binding|INFO|Releasing lport dd4ac4a7-c264-4fc8-95aa-36a318cdf39e from this chassis (sb_readonly=0)
Jan 26 17:23:58 compute-0 ovn_controller[97699]: 2026-01-26T17:23:58Z|00128|binding|INFO|Releasing lport d58b7d53-5cc1-4ed8-aa06-162121fd1800 from this chassis (sb_readonly=0)
Jan 26 17:23:58 compute-0 nova_compute[185389]: 2026-01-26 17:23:58.511 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:58 compute-0 nova_compute[185389]: 2026-01-26 17:23:58.940 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:59 compute-0 podman[201244]: time="2026-01-26T17:23:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 17:23:59 compute-0 podman[201244]: @ - - [26/Jan/2026:17:23:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29741 "" "Go-http-client/1.1"
Jan 26 17:23:59 compute-0 podman[201244]: @ - - [26/Jan/2026:17:23:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4846 "" "Go-http-client/1.1"
Jan 26 17:24:00 compute-0 nova_compute[185389]: 2026-01-26 17:24:00.919 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:24:01 compute-0 openstack_network_exporter[204387]: ERROR   17:24:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 17:24:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:24:01 compute-0 openstack_network_exporter[204387]: ERROR   17:24:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 17:24:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:24:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:24:01.779 106955 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:24:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:24:01.780 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:24:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:24:01.781 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:24:03 compute-0 nova_compute[185389]: 2026-01-26 17:24:03.902 185393 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769448228.9008505, 69a46725-8a69-43b6-a3bc-615971d6f0df => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 17:24:03 compute-0 nova_compute[185389]: 2026-01-26 17:24:03.904 185393 INFO nova.compute.manager [-] [instance: 69a46725-8a69-43b6-a3bc-615971d6f0df] VM Stopped (Lifecycle Event)
Jan 26 17:24:03 compute-0 nova_compute[185389]: 2026-01-26 17:24:03.937 185393 DEBUG nova.compute.manager [None req-d3d41554-aa79-4850-98ed-20b8b2a073fc - - - - - -] [instance: 69a46725-8a69-43b6-a3bc-615971d6f0df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 17:24:03 compute-0 nova_compute[185389]: 2026-01-26 17:24:03.944 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:24:05 compute-0 ovn_controller[97699]: 2026-01-26T17:24:05Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d9:71:2d 10.100.0.13
Jan 26 17:24:05 compute-0 ovn_controller[97699]: 2026-01-26T17:24:05Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d9:71:2d 10.100.0.13
Jan 26 17:24:05 compute-0 nova_compute[185389]: 2026-01-26 17:24:05.921 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:24:08 compute-0 nova_compute[185389]: 2026-01-26 17:24:08.684 185393 DEBUG oslo_concurrency.lockutils [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Acquiring lock "f9b0315f-2a3c-471e-b629-b19d90a40a97" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:24:08 compute-0 nova_compute[185389]: 2026-01-26 17:24:08.686 185393 DEBUG oslo_concurrency.lockutils [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Lock "f9b0315f-2a3c-471e-b629-b19d90a40a97" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:24:08 compute-0 nova_compute[185389]: 2026-01-26 17:24:08.730 185393 DEBUG nova.compute.manager [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 17:24:08 compute-0 nova_compute[185389]: 2026-01-26 17:24:08.948 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:24:09 compute-0 nova_compute[185389]: 2026-01-26 17:24:09.262 185393 DEBUG oslo_concurrency.lockutils [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:24:09 compute-0 nova_compute[185389]: 2026-01-26 17:24:09.263 185393 DEBUG oslo_concurrency.lockutils [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:24:09 compute-0 nova_compute[185389]: 2026-01-26 17:24:09.279 185393 DEBUG nova.virt.hardware [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 17:24:09 compute-0 nova_compute[185389]: 2026-01-26 17:24:09.280 185393 INFO nova.compute.claims [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] Claim successful on node compute-0.ctlplane.example.com
Jan 26 17:24:09 compute-0 nova_compute[185389]: 2026-01-26 17:24:09.644 185393 DEBUG nova.compute.provider_tree [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:24:09 compute-0 nova_compute[185389]: 2026-01-26 17:24:09.674 185393 DEBUG nova.scheduler.client.report [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:24:09 compute-0 nova_compute[185389]: 2026-01-26 17:24:09.711 185393 DEBUG oslo_concurrency.lockutils [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.448s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:24:09 compute-0 nova_compute[185389]: 2026-01-26 17:24:09.712 185393 DEBUG nova.compute.manager [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 17:24:09 compute-0 nova_compute[185389]: 2026-01-26 17:24:09.770 185393 DEBUG nova.compute.manager [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 17:24:09 compute-0 nova_compute[185389]: 2026-01-26 17:24:09.771 185393 DEBUG nova.network.neutron [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 17:24:09 compute-0 nova_compute[185389]: 2026-01-26 17:24:09.795 185393 INFO nova.virt.libvirt.driver [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 17:24:09 compute-0 nova_compute[185389]: 2026-01-26 17:24:09.824 185393 DEBUG nova.compute.manager [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 17:24:09 compute-0 nova_compute[185389]: 2026-01-26 17:24:09.996 185393 DEBUG nova.compute.manager [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 17:24:09 compute-0 nova_compute[185389]: 2026-01-26 17:24:09.998 185393 DEBUG nova.virt.libvirt.driver [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 17:24:09 compute-0 nova_compute[185389]: 2026-01-26 17:24:09.999 185393 INFO nova.virt.libvirt.driver [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] Creating image(s)
Jan 26 17:24:10 compute-0 nova_compute[185389]: 2026-01-26 17:24:10.000 185393 DEBUG oslo_concurrency.lockutils [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Acquiring lock "/var/lib/nova/instances/f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:24:10 compute-0 nova_compute[185389]: 2026-01-26 17:24:10.000 185393 DEBUG oslo_concurrency.lockutils [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Lock "/var/lib/nova/instances/f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:24:10 compute-0 nova_compute[185389]: 2026-01-26 17:24:10.001 185393 DEBUG oslo_concurrency.lockutils [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Lock "/var/lib/nova/instances/f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:24:10 compute-0 nova_compute[185389]: 2026-01-26 17:24:10.002 185393 DEBUG oslo_concurrency.lockutils [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Acquiring lock "ce93f468e93236574b5210325f2425f113a33d3d" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:24:10 compute-0 nova_compute[185389]: 2026-01-26 17:24:10.003 185393 DEBUG oslo_concurrency.lockutils [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Lock "ce93f468e93236574b5210325f2425f113a33d3d" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:24:10 compute-0 podman[258196]: 2026-01-26 17:24:10.223452451 +0000 UTC m=+0.075546791 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 26 17:24:10 compute-0 podman[258195]: 2026-01-26 17:24:10.259583782 +0000 UTC m=+0.112214827 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260120, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=93ecf842527b95c82e14fba92451bd07)
Jan 26 17:24:10 compute-0 podman[258194]: 2026-01-26 17:24:10.26503969 +0000 UTC m=+0.113701908 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, version=9.6, config_id=openstack_network_exporter, managed_by=edpm_ansible, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Jan 26 17:24:10 compute-0 nova_compute[185389]: 2026-01-26 17:24:10.382 185393 DEBUG nova.policy [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5ca35c18e54b493f9efdfe2218cce3c7', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '237a863555d84bd386855d9cf781beb4', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 17:24:10 compute-0 nova_compute[185389]: 2026-01-26 17:24:10.433 185393 INFO nova.compute.manager [None req-4bc1c5af-dbbf-4105-af28-fd3c90e85d27 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] Get console output
Jan 26 17:24:10 compute-0 nova_compute[185389]: 2026-01-26 17:24:10.503 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:24:10 compute-0 nova_compute[185389]: 2026-01-26 17:24:10.537 238630 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 26 17:24:10 compute-0 nova_compute[185389]: 2026-01-26 17:24:10.924 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:24:12 compute-0 podman[258256]: 2026-01-26 17:24:12.18480165 +0000 UTC m=+0.070450214 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 26 17:24:12 compute-0 nova_compute[185389]: 2026-01-26 17:24:12.615 185393 DEBUG nova.network.neutron [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] Successfully created port: 4ea974be-d995-4c0f-bbcd-7a1410b167d8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 17:24:12 compute-0 nova_compute[185389]: 2026-01-26 17:24:12.775 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:24:13 compute-0 nova_compute[185389]: 2026-01-26 17:24:13.187 185393 DEBUG oslo_concurrency.processutils [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ce93f468e93236574b5210325f2425f113a33d3d.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:24:13 compute-0 nova_compute[185389]: 2026-01-26 17:24:13.263 185393 DEBUG oslo_concurrency.processutils [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ce93f468e93236574b5210325f2425f113a33d3d.part --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:24:13 compute-0 nova_compute[185389]: 2026-01-26 17:24:13.265 185393 DEBUG nova.virt.images [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] a3153c85-d830-4fd6-8cd6-1a69e6723a9e was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Jan 26 17:24:13 compute-0 nova_compute[185389]: 2026-01-26 17:24:13.267 185393 DEBUG nova.privsep.utils [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Jan 26 17:24:13 compute-0 nova_compute[185389]: 2026-01-26 17:24:13.267 185393 DEBUG oslo_concurrency.processutils [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/ce93f468e93236574b5210325f2425f113a33d3d.part /var/lib/nova/instances/_base/ce93f468e93236574b5210325f2425f113a33d3d.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:24:13 compute-0 nova_compute[185389]: 2026-01-26 17:24:13.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:24:13 compute-0 nova_compute[185389]: 2026-01-26 17:24:13.721 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 17:24:13 compute-0 nova_compute[185389]: 2026-01-26 17:24:13.798 185393 DEBUG oslo_concurrency.processutils [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/ce93f468e93236574b5210325f2425f113a33d3d.part /var/lib/nova/instances/_base/ce93f468e93236574b5210325f2425f113a33d3d.converted" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:24:13 compute-0 nova_compute[185389]: 2026-01-26 17:24:13.804 185393 DEBUG oslo_concurrency.processutils [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ce93f468e93236574b5210325f2425f113a33d3d.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:24:13 compute-0 nova_compute[185389]: 2026-01-26 17:24:13.868 185393 DEBUG oslo_concurrency.processutils [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ce93f468e93236574b5210325f2425f113a33d3d.converted --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:24:13 compute-0 nova_compute[185389]: 2026-01-26 17:24:13.869 185393 DEBUG oslo_concurrency.lockutils [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Lock "ce93f468e93236574b5210325f2425f113a33d3d" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 3.866s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:24:13 compute-0 nova_compute[185389]: 2026-01-26 17:24:13.885 185393 DEBUG oslo_concurrency.processutils [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ce93f468e93236574b5210325f2425f113a33d3d --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:24:13 compute-0 nova_compute[185389]: 2026-01-26 17:24:13.949 185393 DEBUG oslo_concurrency.processutils [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ce93f468e93236574b5210325f2425f113a33d3d --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:24:13 compute-0 nova_compute[185389]: 2026-01-26 17:24:13.950 185393 DEBUG oslo_concurrency.lockutils [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Acquiring lock "ce93f468e93236574b5210325f2425f113a33d3d" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:24:13 compute-0 nova_compute[185389]: 2026-01-26 17:24:13.951 185393 DEBUG oslo_concurrency.lockutils [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Lock "ce93f468e93236574b5210325f2425f113a33d3d" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:24:13 compute-0 nova_compute[185389]: 2026-01-26 17:24:13.965 185393 DEBUG oslo_concurrency.processutils [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ce93f468e93236574b5210325f2425f113a33d3d --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:24:13 compute-0 nova_compute[185389]: 2026-01-26 17:24:13.981 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:24:14 compute-0 nova_compute[185389]: 2026-01-26 17:24:14.025 185393 DEBUG oslo_concurrency.processutils [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ce93f468e93236574b5210325f2425f113a33d3d --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:24:14 compute-0 nova_compute[185389]: 2026-01-26 17:24:14.026 185393 DEBUG oslo_concurrency.processutils [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ce93f468e93236574b5210325f2425f113a33d3d,backing_fmt=raw /var/lib/nova/instances/f9b0315f-2a3c-471e-b629-b19d90a40a97/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:24:14 compute-0 nova_compute[185389]: 2026-01-26 17:24:14.150 185393 DEBUG oslo_concurrency.processutils [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ce93f468e93236574b5210325f2425f113a33d3d,backing_fmt=raw /var/lib/nova/instances/f9b0315f-2a3c-471e-b629-b19d90a40a97/disk 1073741824" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:24:14 compute-0 nova_compute[185389]: 2026-01-26 17:24:14.151 185393 DEBUG oslo_concurrency.lockutils [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Lock "ce93f468e93236574b5210325f2425f113a33d3d" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.200s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:24:14 compute-0 nova_compute[185389]: 2026-01-26 17:24:14.152 185393 DEBUG oslo_concurrency.processutils [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ce93f468e93236574b5210325f2425f113a33d3d --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:24:14 compute-0 podman[258302]: 2026-01-26 17:24:14.208127293 +0000 UTC m=+0.080419195 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 26 17:24:14 compute-0 nova_compute[185389]: 2026-01-26 17:24:14.215 185393 DEBUG oslo_concurrency.processutils [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ce93f468e93236574b5210325f2425f113a33d3d --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:24:14 compute-0 nova_compute[185389]: 2026-01-26 17:24:14.216 185393 DEBUG nova.virt.disk.api [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Checking if we can resize image /var/lib/nova/instances/f9b0315f-2a3c-471e-b629-b19d90a40a97/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 26 17:24:14 compute-0 nova_compute[185389]: 2026-01-26 17:24:14.217 185393 DEBUG oslo_concurrency.processutils [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f9b0315f-2a3c-471e-b629-b19d90a40a97/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:24:14 compute-0 nova_compute[185389]: 2026-01-26 17:24:14.279 185393 DEBUG oslo_concurrency.processutils [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f9b0315f-2a3c-471e-b629-b19d90a40a97/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:24:14 compute-0 nova_compute[185389]: 2026-01-26 17:24:14.281 185393 DEBUG nova.virt.disk.api [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Cannot resize image /var/lib/nova/instances/f9b0315f-2a3c-471e-b629-b19d90a40a97/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 26 17:24:14 compute-0 nova_compute[185389]: 2026-01-26 17:24:14.281 185393 DEBUG nova.objects.instance [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Lazy-loading 'migration_context' on Instance uuid f9b0315f-2a3c-471e-b629-b19d90a40a97 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 17:24:14 compute-0 nova_compute[185389]: 2026-01-26 17:24:14.317 185393 DEBUG nova.virt.libvirt.driver [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 17:24:14 compute-0 nova_compute[185389]: 2026-01-26 17:24:14.318 185393 DEBUG nova.virt.libvirt.driver [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] Ensure instance console log exists: /var/lib/nova/instances/f9b0315f-2a3c-471e-b629-b19d90a40a97/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 17:24:14 compute-0 nova_compute[185389]: 2026-01-26 17:24:14.318 185393 DEBUG oslo_concurrency.lockutils [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:24:14 compute-0 nova_compute[185389]: 2026-01-26 17:24:14.319 185393 DEBUG oslo_concurrency.lockutils [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:24:14 compute-0 nova_compute[185389]: 2026-01-26 17:24:14.319 185393 DEBUG oslo_concurrency.lockutils [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:24:14 compute-0 nova_compute[185389]: 2026-01-26 17:24:14.506 185393 DEBUG nova.compute.manager [req-ff011ee1-ecce-4a9f-944d-ab366447a648 req-70aa58c3-354a-442a-b061-cd690f418338 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] Received event network-changed-994f4b51-014f-469e-9096-4ffe2dafa019 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 17:24:14 compute-0 nova_compute[185389]: 2026-01-26 17:24:14.507 185393 DEBUG nova.compute.manager [req-ff011ee1-ecce-4a9f-944d-ab366447a648 req-70aa58c3-354a-442a-b061-cd690f418338 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] Refreshing instance network info cache due to event network-changed-994f4b51-014f-469e-9096-4ffe2dafa019. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 17:24:14 compute-0 nova_compute[185389]: 2026-01-26 17:24:14.507 185393 DEBUG oslo_concurrency.lockutils [req-ff011ee1-ecce-4a9f-944d-ab366447a648 req-70aa58c3-354a-442a-b061-cd690f418338 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquiring lock "refresh_cache-cf6218c0-bc2c-4097-91df-f60657ef7ab1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 17:24:14 compute-0 nova_compute[185389]: 2026-01-26 17:24:14.508 185393 DEBUG oslo_concurrency.lockutils [req-ff011ee1-ecce-4a9f-944d-ab366447a648 req-70aa58c3-354a-442a-b061-cd690f418338 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquired lock "refresh_cache-cf6218c0-bc2c-4097-91df-f60657ef7ab1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 17:24:14 compute-0 nova_compute[185389]: 2026-01-26 17:24:14.508 185393 DEBUG nova.network.neutron [req-ff011ee1-ecce-4a9f-944d-ab366447a648 req-70aa58c3-354a-442a-b061-cd690f418338 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] Refreshing network info cache for port 994f4b51-014f-469e-9096-4ffe2dafa019 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 17:24:15 compute-0 nova_compute[185389]: 2026-01-26 17:24:15.141 185393 DEBUG nova.network.neutron [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] Successfully updated port: 4ea974be-d995-4c0f-bbcd-7a1410b167d8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 17:24:15 compute-0 nova_compute[185389]: 2026-01-26 17:24:15.164 185393 DEBUG oslo_concurrency.lockutils [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Acquiring lock "refresh_cache-f9b0315f-2a3c-471e-b629-b19d90a40a97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 17:24:15 compute-0 nova_compute[185389]: 2026-01-26 17:24:15.164 185393 DEBUG oslo_concurrency.lockutils [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Acquired lock "refresh_cache-f9b0315f-2a3c-471e-b629-b19d90a40a97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 17:24:15 compute-0 nova_compute[185389]: 2026-01-26 17:24:15.164 185393 DEBUG nova.network.neutron [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 17:24:15 compute-0 nova_compute[185389]: 2026-01-26 17:24:15.417 185393 DEBUG nova.network.neutron [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 17:24:15 compute-0 nova_compute[185389]: 2026-01-26 17:24:15.721 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:24:15 compute-0 nova_compute[185389]: 2026-01-26 17:24:15.722 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 17:24:15 compute-0 nova_compute[185389]: 2026-01-26 17:24:15.722 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 17:24:15 compute-0 nova_compute[185389]: 2026-01-26 17:24:15.928 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:24:16 compute-0 nova_compute[185389]: 2026-01-26 17:24:16.217 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 26 17:24:16 compute-0 nova_compute[185389]: 2026-01-26 17:24:16.659 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "refresh_cache-186e87cb-beb9-48df-8b10-dfc5c8afe996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 17:24:16 compute-0 nova_compute[185389]: 2026-01-26 17:24:16.660 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquired lock "refresh_cache-186e87cb-beb9-48df-8b10-dfc5c8afe996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 17:24:16 compute-0 nova_compute[185389]: 2026-01-26 17:24:16.660 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 26 17:24:16 compute-0 nova_compute[185389]: 2026-01-26 17:24:16.661 185393 DEBUG nova.objects.instance [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 186e87cb-beb9-48df-8b10-dfc5c8afe996 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 17:24:16 compute-0 nova_compute[185389]: 2026-01-26 17:24:16.681 185393 DEBUG nova.compute.manager [req-c99b8c9b-c30c-4780-a111-cb0a25d3918f req-101664a1-e373-4568-923e-bc1b5df7d0ed 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] Received event network-changed-4ea974be-d995-4c0f-bbcd-7a1410b167d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 17:24:16 compute-0 nova_compute[185389]: 2026-01-26 17:24:16.682 185393 DEBUG nova.compute.manager [req-c99b8c9b-c30c-4780-a111-cb0a25d3918f req-101664a1-e373-4568-923e-bc1b5df7d0ed 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] Refreshing instance network info cache due to event network-changed-4ea974be-d995-4c0f-bbcd-7a1410b167d8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 17:24:16 compute-0 nova_compute[185389]: 2026-01-26 17:24:16.682 185393 DEBUG oslo_concurrency.lockutils [req-c99b8c9b-c30c-4780-a111-cb0a25d3918f req-101664a1-e373-4568-923e-bc1b5df7d0ed 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquiring lock "refresh_cache-f9b0315f-2a3c-471e-b629-b19d90a40a97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 17:24:18 compute-0 podman[258329]: 2026-01-26 17:24:18.207451484 +0000 UTC m=+0.085376149 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, vcs-type=git, build-date=2024-09-18T21:23:30, container_name=kepler, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., release=1214.1726694543, version=9.4, config_id=kepler)
Jan 26 17:24:18 compute-0 podman[258328]: 2026-01-26 17:24:18.207555676 +0000 UTC m=+0.090759124 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 26 17:24:18 compute-0 nova_compute[185389]: 2026-01-26 17:24:18.234 185393 DEBUG nova.network.neutron [req-ff011ee1-ecce-4a9f-944d-ab366447a648 req-70aa58c3-354a-442a-b061-cd690f418338 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] Updated VIF entry in instance network info cache for port 994f4b51-014f-469e-9096-4ffe2dafa019. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 17:24:18 compute-0 nova_compute[185389]: 2026-01-26 17:24:18.236 185393 DEBUG nova.network.neutron [req-ff011ee1-ecce-4a9f-944d-ab366447a648 req-70aa58c3-354a-442a-b061-cd690f418338 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] Updating instance_info_cache with network_info: [{"id": "994f4b51-014f-469e-9096-4ffe2dafa019", "address": "fa:16:3e:d9:71:2d", "network": {"id": "181e9ee7-4b3f-4c71-9f87-ee525fae0a23", "bridge": "br-int", "label": "tempest-network-smoke--2054182957", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "72e07b00ccf54deaa85258e2c3332b45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap994f4b51-01", "ovs_interfaceid": "994f4b51-014f-469e-9096-4ffe2dafa019", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 17:24:18 compute-0 podman[258327]: 2026-01-26 17:24:18.239369011 +0000 UTC m=+0.126578418 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 26 17:24:18 compute-0 nova_compute[185389]: 2026-01-26 17:24:18.268 185393 DEBUG oslo_concurrency.lockutils [req-ff011ee1-ecce-4a9f-944d-ab366447a648 req-70aa58c3-354a-442a-b061-cd690f418338 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Releasing lock "refresh_cache-cf6218c0-bc2c-4097-91df-f60657ef7ab1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 17:24:18 compute-0 nova_compute[185389]: 2026-01-26 17:24:18.985 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:24:19 compute-0 nova_compute[185389]: 2026-01-26 17:24:19.144 185393 DEBUG nova.network.neutron [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] Updating instance_info_cache with network_info: [{"id": "4ea974be-d995-4c0f-bbcd-7a1410b167d8", "address": "fa:16:3e:ea:9e:d9", "network": {"id": "ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.123", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "237a863555d84bd386855d9cf781beb4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ea974be-d9", "ovs_interfaceid": "4ea974be-d995-4c0f-bbcd-7a1410b167d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 17:24:19 compute-0 nova_compute[185389]: 2026-01-26 17:24:19.812 185393 DEBUG oslo_concurrency.lockutils [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Releasing lock "refresh_cache-f9b0315f-2a3c-471e-b629-b19d90a40a97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 17:24:19 compute-0 nova_compute[185389]: 2026-01-26 17:24:19.813 185393 DEBUG nova.compute.manager [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] Instance network_info: |[{"id": "4ea974be-d995-4c0f-bbcd-7a1410b167d8", "address": "fa:16:3e:ea:9e:d9", "network": {"id": "ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.123", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "237a863555d84bd386855d9cf781beb4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ea974be-d9", "ovs_interfaceid": "4ea974be-d995-4c0f-bbcd-7a1410b167d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 17:24:19 compute-0 nova_compute[185389]: 2026-01-26 17:24:19.813 185393 DEBUG oslo_concurrency.lockutils [req-c99b8c9b-c30c-4780-a111-cb0a25d3918f req-101664a1-e373-4568-923e-bc1b5df7d0ed 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquired lock "refresh_cache-f9b0315f-2a3c-471e-b629-b19d90a40a97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 17:24:19 compute-0 nova_compute[185389]: 2026-01-26 17:24:19.814 185393 DEBUG nova.network.neutron [req-c99b8c9b-c30c-4780-a111-cb0a25d3918f req-101664a1-e373-4568-923e-bc1b5df7d0ed 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] Refreshing network info cache for port 4ea974be-d995-4c0f-bbcd-7a1410b167d8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 17:24:19 compute-0 nova_compute[185389]: 2026-01-26 17:24:19.817 185393 DEBUG nova.virt.libvirt.driver [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] Start _get_guest_xml network_info=[{"id": "4ea974be-d995-4c0f-bbcd-7a1410b167d8", "address": "fa:16:3e:ea:9e:d9", "network": {"id": "ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.123", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "237a863555d84bd386855d9cf781beb4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ea974be-d9", "ovs_interfaceid": "4ea974be-d995-4c0f-bbcd-7a1410b167d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T17:23:56Z,direct_url=<?>,disk_format='qcow2',id=a3153c85-d830-4fd6-8cd6-1a69e6723a9e,min_disk=0,min_ram=0,name='tempest-scenario-img--1989180608',owner='237a863555d84bd386855d9cf781beb4',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T17:23:57Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'boot_index': 0, 'encryption_format': None, 'encrypted': False, 'image_id': 'a3153c85-d830-4fd6-8cd6-1a69e6723a9e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 17:24:19 compute-0 nova_compute[185389]: 2026-01-26 17:24:19.825 185393 WARNING nova.virt.libvirt.driver [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:24:19 compute-0 nova_compute[185389]: 2026-01-26 17:24:19.834 185393 DEBUG nova.virt.libvirt.host [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 17:24:19 compute-0 nova_compute[185389]: 2026-01-26 17:24:19.835 185393 DEBUG nova.virt.libvirt.host [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 17:24:19 compute-0 nova_compute[185389]: 2026-01-26 17:24:19.841 185393 DEBUG nova.virt.libvirt.host [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 17:24:19 compute-0 nova_compute[185389]: 2026-01-26 17:24:19.842 185393 DEBUG nova.virt.libvirt.host [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 17:24:19 compute-0 nova_compute[185389]: 2026-01-26 17:24:19.843 185393 DEBUG nova.virt.libvirt.driver [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 17:24:19 compute-0 nova_compute[185389]: 2026-01-26 17:24:19.844 185393 DEBUG nova.virt.hardware [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T17:20:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8d013773-e8ea-4b83-a8e3-f58d9749637f',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T17:23:56Z,direct_url=<?>,disk_format='qcow2',id=a3153c85-d830-4fd6-8cd6-1a69e6723a9e,min_disk=0,min_ram=0,name='tempest-scenario-img--1989180608',owner='237a863555d84bd386855d9cf781beb4',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T17:23:57Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 17:24:19 compute-0 nova_compute[185389]: 2026-01-26 17:24:19.845 185393 DEBUG nova.virt.hardware [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 17:24:19 compute-0 nova_compute[185389]: 2026-01-26 17:24:19.845 185393 DEBUG nova.virt.hardware [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 17:24:19 compute-0 nova_compute[185389]: 2026-01-26 17:24:19.846 185393 DEBUG nova.virt.hardware [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 17:24:19 compute-0 nova_compute[185389]: 2026-01-26 17:24:19.847 185393 DEBUG nova.virt.hardware [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 17:24:19 compute-0 nova_compute[185389]: 2026-01-26 17:24:19.847 185393 DEBUG nova.virt.hardware [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 17:24:19 compute-0 nova_compute[185389]: 2026-01-26 17:24:19.848 185393 DEBUG nova.virt.hardware [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 17:24:19 compute-0 nova_compute[185389]: 2026-01-26 17:24:19.848 185393 DEBUG nova.virt.hardware [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 17:24:19 compute-0 nova_compute[185389]: 2026-01-26 17:24:19.849 185393 DEBUG nova.virt.hardware [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 17:24:19 compute-0 nova_compute[185389]: 2026-01-26 17:24:19.849 185393 DEBUG nova.virt.hardware [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 17:24:19 compute-0 nova_compute[185389]: 2026-01-26 17:24:19.849 185393 DEBUG nova.virt.hardware [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 17:24:19 compute-0 nova_compute[185389]: 2026-01-26 17:24:19.854 185393 DEBUG nova.virt.libvirt.vif [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T17:24:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-9772802-asg-iiflfa6ewgov-qnk6sbr7rvkm-ziuhg6hort2c',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-9772802-asg-iiflfa6ewgov-qnk6sbr7rvkm-ziuhg6hort2c',id=13,image_ref='a3153c85-d830-4fd6-8cd6-1a69e6723a9e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='21873820-28a9-4731-9256-efbf2eb46b4d'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='237a863555d84bd386855d9cf781beb4',ramdisk_id='',reservation_id='r-o33mgm0h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a3153c85-d830-4fd6-8cd6-1a69e6723a9e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-2035201521',owner_user_name='tempest-PrometheusGabbiTest-20352
01521-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T17:24:09Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='5ca35c18e54b493f9efdfe2218cce3c7',uuid=f9b0315f-2a3c-471e-b629-b19d90a40a97,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4ea974be-d995-4c0f-bbcd-7a1410b167d8", "address": "fa:16:3e:ea:9e:d9", "network": {"id": "ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.123", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "237a863555d84bd386855d9cf781beb4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ea974be-d9", "ovs_interfaceid": "4ea974be-d995-4c0f-bbcd-7a1410b167d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 17:24:19 compute-0 nova_compute[185389]: 2026-01-26 17:24:19.855 185393 DEBUG nova.network.os_vif_util [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Converting VIF {"id": "4ea974be-d995-4c0f-bbcd-7a1410b167d8", "address": "fa:16:3e:ea:9e:d9", "network": {"id": "ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.123", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "237a863555d84bd386855d9cf781beb4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ea974be-d9", "ovs_interfaceid": "4ea974be-d995-4c0f-bbcd-7a1410b167d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 17:24:19 compute-0 nova_compute[185389]: 2026-01-26 17:24:19.856 185393 DEBUG nova.network.os_vif_util [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ea:9e:d9,bridge_name='br-int',has_traffic_filtering=True,id=4ea974be-d995-4c0f-bbcd-7a1410b167d8,network=Network(ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ea974be-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 17:24:19 compute-0 nova_compute[185389]: 2026-01-26 17:24:19.857 185393 DEBUG nova.objects.instance [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Lazy-loading 'pci_devices' on Instance uuid f9b0315f-2a3c-471e-b629-b19d90a40a97 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 17:24:19 compute-0 nova_compute[185389]: 2026-01-26 17:24:19.874 185393 DEBUG nova.virt.libvirt.driver [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] End _get_guest_xml xml=<domain type="kvm">
Jan 26 17:24:19 compute-0 nova_compute[185389]:   <uuid>f9b0315f-2a3c-471e-b629-b19d90a40a97</uuid>
Jan 26 17:24:19 compute-0 nova_compute[185389]:   <name>instance-0000000d</name>
Jan 26 17:24:19 compute-0 nova_compute[185389]:   <memory>131072</memory>
Jan 26 17:24:19 compute-0 nova_compute[185389]:   <vcpu>1</vcpu>
Jan 26 17:24:19 compute-0 nova_compute[185389]:   <metadata>
Jan 26 17:24:19 compute-0 nova_compute[185389]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 17:24:19 compute-0 nova_compute[185389]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 17:24:19 compute-0 nova_compute[185389]:       <nova:name>te-9772802-asg-iiflfa6ewgov-qnk6sbr7rvkm-ziuhg6hort2c</nova:name>
Jan 26 17:24:19 compute-0 nova_compute[185389]:       <nova:creationTime>2026-01-26 17:24:19</nova:creationTime>
Jan 26 17:24:19 compute-0 nova_compute[185389]:       <nova:flavor name="m1.nano">
Jan 26 17:24:19 compute-0 nova_compute[185389]:         <nova:memory>128</nova:memory>
Jan 26 17:24:19 compute-0 nova_compute[185389]:         <nova:disk>1</nova:disk>
Jan 26 17:24:19 compute-0 nova_compute[185389]:         <nova:swap>0</nova:swap>
Jan 26 17:24:19 compute-0 nova_compute[185389]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 17:24:19 compute-0 nova_compute[185389]:         <nova:vcpus>1</nova:vcpus>
Jan 26 17:24:19 compute-0 nova_compute[185389]:       </nova:flavor>
Jan 26 17:24:19 compute-0 nova_compute[185389]:       <nova:owner>
Jan 26 17:24:19 compute-0 nova_compute[185389]:         <nova:user uuid="5ca35c18e54b493f9efdfe2218cce3c7">tempest-PrometheusGabbiTest-2035201521-project-member</nova:user>
Jan 26 17:24:19 compute-0 nova_compute[185389]:         <nova:project uuid="237a863555d84bd386855d9cf781beb4">tempest-PrometheusGabbiTest-2035201521</nova:project>
Jan 26 17:24:19 compute-0 nova_compute[185389]:       </nova:owner>
Jan 26 17:24:19 compute-0 nova_compute[185389]:       <nova:root type="image" uuid="a3153c85-d830-4fd6-8cd6-1a69e6723a9e"/>
Jan 26 17:24:19 compute-0 nova_compute[185389]:       <nova:ports>
Jan 26 17:24:19 compute-0 nova_compute[185389]:         <nova:port uuid="4ea974be-d995-4c0f-bbcd-7a1410b167d8">
Jan 26 17:24:19 compute-0 nova_compute[185389]:           <nova:ip type="fixed" address="10.100.3.123" ipVersion="4"/>
Jan 26 17:24:19 compute-0 nova_compute[185389]:         </nova:port>
Jan 26 17:24:19 compute-0 nova_compute[185389]:       </nova:ports>
Jan 26 17:24:19 compute-0 nova_compute[185389]:     </nova:instance>
Jan 26 17:24:19 compute-0 nova_compute[185389]:   </metadata>
Jan 26 17:24:19 compute-0 nova_compute[185389]:   <sysinfo type="smbios">
Jan 26 17:24:19 compute-0 nova_compute[185389]:     <system>
Jan 26 17:24:19 compute-0 nova_compute[185389]:       <entry name="manufacturer">RDO</entry>
Jan 26 17:24:19 compute-0 nova_compute[185389]:       <entry name="product">OpenStack Compute</entry>
Jan 26 17:24:19 compute-0 nova_compute[185389]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 17:24:19 compute-0 nova_compute[185389]:       <entry name="serial">f9b0315f-2a3c-471e-b629-b19d90a40a97</entry>
Jan 26 17:24:19 compute-0 nova_compute[185389]:       <entry name="uuid">f9b0315f-2a3c-471e-b629-b19d90a40a97</entry>
Jan 26 17:24:19 compute-0 nova_compute[185389]:       <entry name="family">Virtual Machine</entry>
Jan 26 17:24:19 compute-0 nova_compute[185389]:     </system>
Jan 26 17:24:19 compute-0 nova_compute[185389]:   </sysinfo>
Jan 26 17:24:19 compute-0 nova_compute[185389]:   <os>
Jan 26 17:24:19 compute-0 nova_compute[185389]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 17:24:19 compute-0 nova_compute[185389]:     <boot dev="hd"/>
Jan 26 17:24:19 compute-0 nova_compute[185389]:     <smbios mode="sysinfo"/>
Jan 26 17:24:19 compute-0 nova_compute[185389]:   </os>
Jan 26 17:24:19 compute-0 nova_compute[185389]:   <features>
Jan 26 17:24:19 compute-0 nova_compute[185389]:     <acpi/>
Jan 26 17:24:19 compute-0 nova_compute[185389]:     <apic/>
Jan 26 17:24:19 compute-0 nova_compute[185389]:     <vmcoreinfo/>
Jan 26 17:24:19 compute-0 nova_compute[185389]:   </features>
Jan 26 17:24:19 compute-0 nova_compute[185389]:   <clock offset="utc">
Jan 26 17:24:19 compute-0 nova_compute[185389]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 17:24:19 compute-0 nova_compute[185389]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 17:24:19 compute-0 nova_compute[185389]:     <timer name="hpet" present="no"/>
Jan 26 17:24:19 compute-0 nova_compute[185389]:   </clock>
Jan 26 17:24:19 compute-0 nova_compute[185389]:   <cpu mode="host-model" match="exact">
Jan 26 17:24:19 compute-0 nova_compute[185389]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 17:24:19 compute-0 nova_compute[185389]:   </cpu>
Jan 26 17:24:19 compute-0 nova_compute[185389]:   <devices>
Jan 26 17:24:19 compute-0 nova_compute[185389]:     <disk type="file" device="disk">
Jan 26 17:24:19 compute-0 nova_compute[185389]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 26 17:24:19 compute-0 nova_compute[185389]:       <source file="/var/lib/nova/instances/f9b0315f-2a3c-471e-b629-b19d90a40a97/disk"/>
Jan 26 17:24:19 compute-0 nova_compute[185389]:       <target dev="vda" bus="virtio"/>
Jan 26 17:24:19 compute-0 nova_compute[185389]:     </disk>
Jan 26 17:24:19 compute-0 nova_compute[185389]:     <disk type="file" device="cdrom">
Jan 26 17:24:19 compute-0 nova_compute[185389]:       <driver name="qemu" type="raw" cache="none"/>
Jan 26 17:24:19 compute-0 nova_compute[185389]:       <source file="/var/lib/nova/instances/f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.config"/>
Jan 26 17:24:19 compute-0 nova_compute[185389]:       <target dev="sda" bus="sata"/>
Jan 26 17:24:19 compute-0 nova_compute[185389]:     </disk>
Jan 26 17:24:19 compute-0 nova_compute[185389]:     <interface type="ethernet">
Jan 26 17:24:19 compute-0 nova_compute[185389]:       <mac address="fa:16:3e:ea:9e:d9"/>
Jan 26 17:24:19 compute-0 nova_compute[185389]:       <model type="virtio"/>
Jan 26 17:24:19 compute-0 nova_compute[185389]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 17:24:19 compute-0 nova_compute[185389]:       <mtu size="1442"/>
Jan 26 17:24:19 compute-0 nova_compute[185389]:       <target dev="tap4ea974be-d9"/>
Jan 26 17:24:19 compute-0 nova_compute[185389]:     </interface>
Jan 26 17:24:19 compute-0 nova_compute[185389]:     <serial type="pty">
Jan 26 17:24:19 compute-0 nova_compute[185389]:       <log file="/var/lib/nova/instances/f9b0315f-2a3c-471e-b629-b19d90a40a97/console.log" append="off"/>
Jan 26 17:24:19 compute-0 nova_compute[185389]:     </serial>
Jan 26 17:24:19 compute-0 nova_compute[185389]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 17:24:19 compute-0 nova_compute[185389]:     <video>
Jan 26 17:24:19 compute-0 nova_compute[185389]:       <model type="virtio"/>
Jan 26 17:24:19 compute-0 nova_compute[185389]:     </video>
Jan 26 17:24:19 compute-0 nova_compute[185389]:     <input type="tablet" bus="usb"/>
Jan 26 17:24:19 compute-0 nova_compute[185389]:     <rng model="virtio">
Jan 26 17:24:19 compute-0 nova_compute[185389]:       <backend model="random">/dev/urandom</backend>
Jan 26 17:24:19 compute-0 nova_compute[185389]:     </rng>
Jan 26 17:24:19 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root"/>
Jan 26 17:24:19 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:24:19 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:24:19 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:24:19 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:24:19 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:24:19 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:24:19 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:24:19 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:24:19 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:24:19 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:24:19 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:24:19 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:24:19 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:24:19 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:24:19 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:24:19 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:24:19 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:24:19 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:24:19 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:24:19 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:24:19 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:24:19 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:24:19 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:24:19 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:24:19 compute-0 nova_compute[185389]:     <controller type="usb" index="0"/>
Jan 26 17:24:19 compute-0 nova_compute[185389]:     <memballoon model="virtio">
Jan 26 17:24:19 compute-0 nova_compute[185389]:       <stats period="10"/>
Jan 26 17:24:19 compute-0 nova_compute[185389]:     </memballoon>
Jan 26 17:24:19 compute-0 nova_compute[185389]:   </devices>
Jan 26 17:24:19 compute-0 nova_compute[185389]: </domain>
Jan 26 17:24:19 compute-0 nova_compute[185389]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 17:24:19 compute-0 nova_compute[185389]: 2026-01-26 17:24:19.876 185393 DEBUG nova.compute.manager [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] Preparing to wait for external event network-vif-plugged-4ea974be-d995-4c0f-bbcd-7a1410b167d8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 17:24:19 compute-0 nova_compute[185389]: 2026-01-26 17:24:19.876 185393 DEBUG oslo_concurrency.lockutils [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Acquiring lock "f9b0315f-2a3c-471e-b629-b19d90a40a97-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:24:19 compute-0 nova_compute[185389]: 2026-01-26 17:24:19.876 185393 DEBUG oslo_concurrency.lockutils [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Lock "f9b0315f-2a3c-471e-b629-b19d90a40a97-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:24:19 compute-0 nova_compute[185389]: 2026-01-26 17:24:19.877 185393 DEBUG oslo_concurrency.lockutils [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Lock "f9b0315f-2a3c-471e-b629-b19d90a40a97-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:24:19 compute-0 nova_compute[185389]: 2026-01-26 17:24:19.877 185393 DEBUG nova.virt.libvirt.vif [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T17:24:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-9772802-asg-iiflfa6ewgov-qnk6sbr7rvkm-ziuhg6hort2c',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-9772802-asg-iiflfa6ewgov-qnk6sbr7rvkm-ziuhg6hort2c',id=13,image_ref='a3153c85-d830-4fd6-8cd6-1a69e6723a9e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='21873820-28a9-4731-9256-efbf2eb46b4d'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='237a863555d84bd386855d9cf781beb4',ramdisk_id='',reservation_id='r-o33mgm0h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a3153c85-d830-4fd6-8cd6-1a69e6723a9e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-2035201521',owner_user_name='tempest-PrometheusGabbi
Test-2035201521-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T17:24:09Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='5ca35c18e54b493f9efdfe2218cce3c7',uuid=f9b0315f-2a3c-471e-b629-b19d90a40a97,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4ea974be-d995-4c0f-bbcd-7a1410b167d8", "address": "fa:16:3e:ea:9e:d9", "network": {"id": "ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.123", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "237a863555d84bd386855d9cf781beb4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ea974be-d9", "ovs_interfaceid": "4ea974be-d995-4c0f-bbcd-7a1410b167d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 17:24:19 compute-0 nova_compute[185389]: 2026-01-26 17:24:19.878 185393 DEBUG nova.network.os_vif_util [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Converting VIF {"id": "4ea974be-d995-4c0f-bbcd-7a1410b167d8", "address": "fa:16:3e:ea:9e:d9", "network": {"id": "ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.123", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "237a863555d84bd386855d9cf781beb4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ea974be-d9", "ovs_interfaceid": "4ea974be-d995-4c0f-bbcd-7a1410b167d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 17:24:19 compute-0 nova_compute[185389]: 2026-01-26 17:24:19.878 185393 DEBUG nova.network.os_vif_util [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ea:9e:d9,bridge_name='br-int',has_traffic_filtering=True,id=4ea974be-d995-4c0f-bbcd-7a1410b167d8,network=Network(ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ea974be-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 17:24:19 compute-0 nova_compute[185389]: 2026-01-26 17:24:19.879 185393 DEBUG os_vif [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ea:9e:d9,bridge_name='br-int',has_traffic_filtering=True,id=4ea974be-d995-4c0f-bbcd-7a1410b167d8,network=Network(ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ea974be-d9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 17:24:19 compute-0 nova_compute[185389]: 2026-01-26 17:24:19.879 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:24:19 compute-0 nova_compute[185389]: 2026-01-26 17:24:19.879 185393 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:24:19 compute-0 nova_compute[185389]: 2026-01-26 17:24:19.880 185393 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 17:24:19 compute-0 nova_compute[185389]: 2026-01-26 17:24:19.884 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:24:19 compute-0 nova_compute[185389]: 2026-01-26 17:24:19.884 185393 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4ea974be-d9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:24:19 compute-0 nova_compute[185389]: 2026-01-26 17:24:19.885 185393 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4ea974be-d9, col_values=(('external_ids', {'iface-id': '4ea974be-d995-4c0f-bbcd-7a1410b167d8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ea:9e:d9', 'vm-uuid': 'f9b0315f-2a3c-471e-b629-b19d90a40a97'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:24:19 compute-0 nova_compute[185389]: 2026-01-26 17:24:19.887 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:24:19 compute-0 NetworkManager[56253]: <info>  [1769448259.8893] manager: (tap4ea974be-d9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/63)
Jan 26 17:24:19 compute-0 nova_compute[185389]: 2026-01-26 17:24:19.891 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 17:24:19 compute-0 nova_compute[185389]: 2026-01-26 17:24:19.899 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:24:19 compute-0 nova_compute[185389]: 2026-01-26 17:24:19.900 185393 INFO os_vif [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ea:9e:d9,bridge_name='br-int',has_traffic_filtering=True,id=4ea974be-d995-4c0f-bbcd-7a1410b167d8,network=Network(ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ea974be-d9')
Jan 26 17:24:19 compute-0 nova_compute[185389]: 2026-01-26 17:24:19.982 185393 DEBUG nova.virt.libvirt.driver [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 17:24:19 compute-0 nova_compute[185389]: 2026-01-26 17:24:19.983 185393 DEBUG nova.virt.libvirt.driver [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 17:24:19 compute-0 nova_compute[185389]: 2026-01-26 17:24:19.983 185393 DEBUG nova.virt.libvirt.driver [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] No VIF found with MAC fa:16:3e:ea:9e:d9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 17:24:19 compute-0 nova_compute[185389]: 2026-01-26 17:24:19.983 185393 INFO nova.virt.libvirt.driver [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] Using config drive
Jan 26 17:24:20 compute-0 nova_compute[185389]: 2026-01-26 17:24:20.930 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:24:21 compute-0 nova_compute[185389]: 2026-01-26 17:24:21.586 185393 INFO nova.virt.libvirt.driver [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] Creating config drive at /var/lib/nova/instances/f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.config
Jan 26 17:24:21 compute-0 nova_compute[185389]: 2026-01-26 17:24:21.602 185393 DEBUG oslo_concurrency.processutils [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppuy6vgdy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:24:21 compute-0 nova_compute[185389]: 2026-01-26 17:24:21.728 185393 DEBUG oslo_concurrency.processutils [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppuy6vgdy" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:24:21 compute-0 kernel: tap4ea974be-d9: entered promiscuous mode
Jan 26 17:24:21 compute-0 NetworkManager[56253]: <info>  [1769448261.8083] manager: (tap4ea974be-d9): new Tun device (/org/freedesktop/NetworkManager/Devices/64)
Jan 26 17:24:21 compute-0 nova_compute[185389]: 2026-01-26 17:24:21.808 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:24:21 compute-0 ovn_controller[97699]: 2026-01-26T17:24:21Z|00129|binding|INFO|Claiming lport 4ea974be-d995-4c0f-bbcd-7a1410b167d8 for this chassis.
Jan 26 17:24:21 compute-0 ovn_controller[97699]: 2026-01-26T17:24:21Z|00130|binding|INFO|4ea974be-d995-4c0f-bbcd-7a1410b167d8: Claiming fa:16:3e:ea:9e:d9 10.100.3.123
Jan 26 17:24:21 compute-0 nova_compute[185389]: 2026-01-26 17:24:21.815 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:24:21 compute-0 ovn_controller[97699]: 2026-01-26T17:24:21Z|00131|binding|INFO|Setting lport 4ea974be-d995-4c0f-bbcd-7a1410b167d8 ovn-installed in OVS
Jan 26 17:24:21 compute-0 nova_compute[185389]: 2026-01-26 17:24:21.850 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:24:21 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:24:21.854 106955 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ea:9e:d9 10.100.3.123'], port_security=['fa:16:3e:ea:9e:d9 10.100.3.123'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.3.123/16', 'neutron:device_id': 'f9b0315f-2a3c-471e-b629-b19d90a40a97', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '237a863555d84bd386855d9cf781beb4', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fc68cb5f-1d27-40d0-8734-5af9ebb54c8e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a60e9a2c-a4db-4b50-8dd7-bdfa9e915edf, chassis=[<ovs.db.idl.Row object at 0x7faee18cfdf0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7faee18cfdf0>], logical_port=4ea974be-d995-4c0f-bbcd-7a1410b167d8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 17:24:21 compute-0 ovn_controller[97699]: 2026-01-26T17:24:21Z|00132|binding|INFO|Setting lport 4ea974be-d995-4c0f-bbcd-7a1410b167d8 up in Southbound
Jan 26 17:24:21 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:24:21.855 106955 INFO neutron.agent.ovn.metadata.agent [-] Port 4ea974be-d995-4c0f-bbcd-7a1410b167d8 in datapath ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f bound to our chassis
Jan 26 17:24:21 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:24:21.857 106955 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f
Jan 26 17:24:21 compute-0 systemd-machined[156679]: New machine qemu-14-instance-0000000d.
Jan 26 17:24:21 compute-0 systemd-udevd[258414]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 17:24:21 compute-0 systemd[1]: Started Virtual Machine qemu-14-instance-0000000d.
Jan 26 17:24:21 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:24:21.875 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[5a5647b9-4143-4e93-8e83-9244faacecac]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:24:21 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:24:21.877 106955 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapad47c1ee-d1 in ovnmeta-ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 17:24:21 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:24:21.880 238734 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapad47c1ee-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 17:24:21 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:24:21.880 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[ff480fca-428e-4d9e-8f94-c7d48d7e9cc1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:24:21 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:24:21.881 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[0695b73b-6f93-4d5a-ba69-045067b35409]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:24:21 compute-0 NetworkManager[56253]: <info>  [1769448261.8865] device (tap4ea974be-d9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 17:24:21 compute-0 NetworkManager[56253]: <info>  [1769448261.8907] device (tap4ea974be-d9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 17:24:21 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:24:21.898 107449 DEBUG oslo.privsep.daemon [-] privsep: reply[6ec62082-443c-4fd1-9179-b37c2b8755de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:24:21 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:24:21.916 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[69016491-8e6a-49d3-9dce-78ace59a081d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:24:21 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:24:21.956 238787 DEBUG oslo.privsep.daemon [-] privsep: reply[22805ca7-9ceb-4c98-b76b-a3fcba7d972d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:24:21 compute-0 systemd-udevd[258416]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 17:24:21 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:24:21.965 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[634c35b8-7d91-43cf-9269-163ca1ea9e8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:24:21 compute-0 NetworkManager[56253]: <info>  [1769448261.9671] manager: (tapad47c1ee-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/65)
Jan 26 17:24:22 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:24:22.009 238787 DEBUG oslo.privsep.daemon [-] privsep: reply[166961b3-b46c-4f07-a3c6-e20bf03d4d58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:24:22 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:24:22.014 238787 DEBUG oslo.privsep.daemon [-] privsep: reply[98bf4448-863d-4d2d-83fa-e5366c9aa400]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:24:22 compute-0 NetworkManager[56253]: <info>  [1769448262.0462] device (tapad47c1ee-d0): carrier: link connected
Jan 26 17:24:22 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:24:22.052 238787 DEBUG oslo.privsep.daemon [-] privsep: reply[ff695041-f772-414c-bc4e-788dfbd76945]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:24:22 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:24:22.071 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[dd337aed-3b24-486b-8302-dd9406b45d61]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapad47c1ee-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ba:d4:74'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 42], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 691054, 'reachable_time': 16319, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258446, 'error': None, 'target': 'ovnmeta-ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:24:22 compute-0 nova_compute[185389]: 2026-01-26 17:24:22.086 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Updating instance_info_cache with network_info: [{"id": "6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3", "address": "fa:16:3e:b3:ea:64", "network": {"id": "4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1598418847-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b9ff6ad3012499db2eb0a82a1ccbcaa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e11a3e1-dc", "ovs_interfaceid": "6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 17:24:22 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:24:22.091 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[23d66a26-e404-45d5-af4e-af5b482c3601]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feba:d474'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 691054, 'tstamp': 691054}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258448, 'error': None, 'target': 'ovnmeta-ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:24:22 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:24:22.115 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[331e29f3-5e6d-4146-a6d9-5e155b7b38c3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapad47c1ee-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ba:d4:74'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 42], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 691054, 'reachable_time': 16319, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 258453, 'error': None, 'target': 'ovnmeta-ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:24:22 compute-0 nova_compute[185389]: 2026-01-26 17:24:22.148 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Releasing lock "refresh_cache-186e87cb-beb9-48df-8b10-dfc5c8afe996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 17:24:22 compute-0 nova_compute[185389]: 2026-01-26 17:24:22.149 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 26 17:24:22 compute-0 nova_compute[185389]: 2026-01-26 17:24:22.149 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:24:22 compute-0 nova_compute[185389]: 2026-01-26 17:24:22.149 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:24:22 compute-0 nova_compute[185389]: 2026-01-26 17:24:22.150 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:24:22 compute-0 nova_compute[185389]: 2026-01-26 17:24:22.152 185393 DEBUG oslo_concurrency.lockutils [None req-128bdcf3-9d05-42be-ba59-b999064ad808 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Acquiring lock "186e87cb-beb9-48df-8b10-dfc5c8afe996" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:24:22 compute-0 nova_compute[185389]: 2026-01-26 17:24:22.152 185393 DEBUG oslo_concurrency.lockutils [None req-128bdcf3-9d05-42be-ba59-b999064ad808 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Lock "186e87cb-beb9-48df-8b10-dfc5c8afe996" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:24:22 compute-0 nova_compute[185389]: 2026-01-26 17:24:22.153 185393 DEBUG oslo_concurrency.lockutils [None req-128bdcf3-9d05-42be-ba59-b999064ad808 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Acquiring lock "186e87cb-beb9-48df-8b10-dfc5c8afe996-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:24:22 compute-0 nova_compute[185389]: 2026-01-26 17:24:22.153 185393 DEBUG oslo_concurrency.lockutils [None req-128bdcf3-9d05-42be-ba59-b999064ad808 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Lock "186e87cb-beb9-48df-8b10-dfc5c8afe996-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:24:22 compute-0 nova_compute[185389]: 2026-01-26 17:24:22.153 185393 DEBUG oslo_concurrency.lockutils [None req-128bdcf3-9d05-42be-ba59-b999064ad808 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Lock "186e87cb-beb9-48df-8b10-dfc5c8afe996-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:24:22 compute-0 nova_compute[185389]: 2026-01-26 17:24:22.155 185393 INFO nova.compute.manager [None req-128bdcf3-9d05-42be-ba59-b999064ad808 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Terminating instance
Jan 26 17:24:22 compute-0 nova_compute[185389]: 2026-01-26 17:24:22.156 185393 DEBUG nova.compute.manager [None req-128bdcf3-9d05-42be-ba59-b999064ad808 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 17:24:22 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:24:22.189 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[db36d362-503a-4d74-a045-aba8050f38c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:24:22 compute-0 nova_compute[185389]: 2026-01-26 17:24:22.220 185393 DEBUG nova.virt.driver [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Emitting event <LifecycleEvent: 1769448262.2193434, f9b0315f-2a3c-471e-b629-b19d90a40a97 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 17:24:22 compute-0 nova_compute[185389]: 2026-01-26 17:24:22.220 185393 INFO nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] VM Started (Lifecycle Event)
Jan 26 17:24:22 compute-0 kernel: tap6e11a3e1-dc (unregistering): left promiscuous mode
Jan 26 17:24:22 compute-0 NetworkManager[56253]: <info>  [1769448262.2594] device (tap6e11a3e1-dc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 17:24:22 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:24:22.266 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[75d59ddf-46c6-48fd-9c1e-3ac95fb28f76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:24:22 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:24:22.269 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapad47c1ee-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:24:22 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:24:22.269 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 17:24:22 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:24:22.269 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapad47c1ee-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:24:22 compute-0 NetworkManager[56253]: <info>  [1769448262.2737] manager: (tapad47c1ee-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/66)
Jan 26 17:24:22 compute-0 nova_compute[185389]: 2026-01-26 17:24:22.273 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:24:22 compute-0 ovn_controller[97699]: 2026-01-26T17:24:22Z|00133|binding|INFO|Releasing lport 6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3 from this chassis (sb_readonly=0)
Jan 26 17:24:22 compute-0 ovn_controller[97699]: 2026-01-26T17:24:22Z|00134|binding|INFO|Setting lport 6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3 down in Southbound
Jan 26 17:24:22 compute-0 ovn_controller[97699]: 2026-01-26T17:24:22Z|00135|binding|INFO|Removing iface tap6e11a3e1-dc ovn-installed in OVS
Jan 26 17:24:22 compute-0 nova_compute[185389]: 2026-01-26 17:24:22.281 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:24:22 compute-0 nova_compute[185389]: 2026-01-26 17:24:22.287 185393 DEBUG nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 17:24:22 compute-0 kernel: tapad47c1ee-d0: entered promiscuous mode
Jan 26 17:24:22 compute-0 nova_compute[185389]: 2026-01-26 17:24:22.302 185393 DEBUG nova.virt.driver [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Emitting event <LifecycleEvent: 1769448262.219515, f9b0315f-2a3c-471e-b629-b19d90a40a97 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 17:24:22 compute-0 nova_compute[185389]: 2026-01-26 17:24:22.302 185393 INFO nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] VM Paused (Lifecycle Event)
Jan 26 17:24:22 compute-0 nova_compute[185389]: 2026-01-26 17:24:22.304 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:24:22 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:24:22.305 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapad47c1ee-d0, col_values=(('external_ids', {'iface-id': '072b84ed-db94-41f8-b8ae-79603b591704'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:24:22 compute-0 nova_compute[185389]: 2026-01-26 17:24:22.306 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:24:22 compute-0 ovn_controller[97699]: 2026-01-26T17:24:22Z|00136|binding|INFO|Releasing lport 072b84ed-db94-41f8-b8ae-79603b591704 from this chassis (sb_readonly=1)
Jan 26 17:24:22 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d00000007.scope: Deactivated successfully.
Jan 26 17:24:22 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d00000007.scope: Consumed 44.165s CPU time.
Jan 26 17:24:22 compute-0 nova_compute[185389]: 2026-01-26 17:24:22.325 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:24:22 compute-0 systemd-machined[156679]: Machine qemu-11-instance-00000007 terminated.
Jan 26 17:24:22 compute-0 nova_compute[185389]: 2026-01-26 17:24:22.332 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:24:22 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:24:22.333 106955 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 17:24:22 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:24:22.335 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[30a32c28-8adf-431e-ae6f-ed1d3aca49ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:24:22 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:24:22.336 106955 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 17:24:22 compute-0 ovn_metadata_agent[106950]: global
Jan 26 17:24:22 compute-0 ovn_metadata_agent[106950]:     log         /dev/log local0 debug
Jan 26 17:24:22 compute-0 ovn_metadata_agent[106950]:     log-tag     haproxy-metadata-proxy-ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f
Jan 26 17:24:22 compute-0 ovn_metadata_agent[106950]:     user        root
Jan 26 17:24:22 compute-0 ovn_metadata_agent[106950]:     group       root
Jan 26 17:24:22 compute-0 ovn_metadata_agent[106950]:     maxconn     1024
Jan 26 17:24:22 compute-0 ovn_metadata_agent[106950]:     pidfile     /var/lib/neutron/external/pids/ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f.pid.haproxy
Jan 26 17:24:22 compute-0 ovn_metadata_agent[106950]:     daemon
Jan 26 17:24:22 compute-0 ovn_metadata_agent[106950]: 
Jan 26 17:24:22 compute-0 ovn_metadata_agent[106950]: defaults
Jan 26 17:24:22 compute-0 ovn_metadata_agent[106950]:     log global
Jan 26 17:24:22 compute-0 ovn_metadata_agent[106950]:     mode http
Jan 26 17:24:22 compute-0 ovn_metadata_agent[106950]:     option httplog
Jan 26 17:24:22 compute-0 ovn_metadata_agent[106950]:     option dontlognull
Jan 26 17:24:22 compute-0 ovn_metadata_agent[106950]:     option http-server-close
Jan 26 17:24:22 compute-0 ovn_metadata_agent[106950]:     option forwardfor
Jan 26 17:24:22 compute-0 ovn_metadata_agent[106950]:     retries                 3
Jan 26 17:24:22 compute-0 ovn_metadata_agent[106950]:     timeout http-request    30s
Jan 26 17:24:22 compute-0 ovn_metadata_agent[106950]:     timeout connect         30s
Jan 26 17:24:22 compute-0 ovn_metadata_agent[106950]:     timeout client          32s
Jan 26 17:24:22 compute-0 ovn_metadata_agent[106950]:     timeout server          32s
Jan 26 17:24:22 compute-0 ovn_metadata_agent[106950]:     timeout http-keep-alive 30s
Jan 26 17:24:22 compute-0 ovn_metadata_agent[106950]: 
Jan 26 17:24:22 compute-0 ovn_metadata_agent[106950]: 
Jan 26 17:24:22 compute-0 ovn_metadata_agent[106950]: listen listener
Jan 26 17:24:22 compute-0 ovn_metadata_agent[106950]:     bind 169.254.169.254:80
Jan 26 17:24:22 compute-0 ovn_metadata_agent[106950]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 17:24:22 compute-0 ovn_metadata_agent[106950]:     http-request add-header X-OVN-Network-ID ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f
Jan 26 17:24:22 compute-0 ovn_metadata_agent[106950]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 17:24:22 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:24:22.337 106955 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f', 'env', 'PROCESS_TAG=haproxy-ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 17:24:22 compute-0 NetworkManager[56253]: <info>  [1769448262.3853] manager: (tap6e11a3e1-dc): new Tun device (/org/freedesktop/NetworkManager/Devices/67)
Jan 26 17:24:22 compute-0 nova_compute[185389]: 2026-01-26 17:24:22.389 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:24:22 compute-0 nova_compute[185389]: 2026-01-26 17:24:22.434 185393 INFO nova.virt.libvirt.driver [-] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Instance destroyed successfully.
Jan 26 17:24:22 compute-0 nova_compute[185389]: 2026-01-26 17:24:22.435 185393 DEBUG nova.objects.instance [None req-128bdcf3-9d05-42be-ba59-b999064ad808 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Lazy-loading 'resources' on Instance uuid 186e87cb-beb9-48df-8b10-dfc5c8afe996 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 17:24:22 compute-0 nova_compute[185389]: 2026-01-26 17:24:22.523 185393 DEBUG nova.network.neutron [req-c99b8c9b-c30c-4780-a111-cb0a25d3918f req-101664a1-e373-4568-923e-bc1b5df7d0ed 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] Updated VIF entry in instance network info cache for port 4ea974be-d995-4c0f-bbcd-7a1410b167d8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 17:24:22 compute-0 nova_compute[185389]: 2026-01-26 17:24:22.524 185393 DEBUG nova.network.neutron [req-c99b8c9b-c30c-4780-a111-cb0a25d3918f req-101664a1-e373-4568-923e-bc1b5df7d0ed 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] Updating instance_info_cache with network_info: [{"id": "4ea974be-d995-4c0f-bbcd-7a1410b167d8", "address": "fa:16:3e:ea:9e:d9", "network": {"id": "ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.123", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "237a863555d84bd386855d9cf781beb4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ea974be-d9", "ovs_interfaceid": "4ea974be-d995-4c0f-bbcd-7a1410b167d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 17:24:22 compute-0 podman[258508]: 2026-01-26 17:24:22.761099826 +0000 UTC m=+0.044402536 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 17:24:22 compute-0 podman[258508]: 2026-01-26 17:24:22.925662562 +0000 UTC m=+0.208965242 container create f11e9df156ecc511226170db45ec5176ba38d57a473fd016daa9bb147140b5b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:24:22 compute-0 systemd[1]: Started libpod-conmon-f11e9df156ecc511226170db45ec5176ba38d57a473fd016daa9bb147140b5b8.scope.
Jan 26 17:24:23 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:24:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/004a1f2e6a5973c77922b17c81b2560a93c0489232455d8429f76fca2518fa37/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 17:24:23 compute-0 podman[258508]: 2026-01-26 17:24:23.056775241 +0000 UTC m=+0.340077931 container init f11e9df156ecc511226170db45ec5176ba38d57a473fd016daa9bb147140b5b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Jan 26 17:24:23 compute-0 podman[258508]: 2026-01-26 17:24:23.066496226 +0000 UTC m=+0.349798946 container start f11e9df156ecc511226170db45ec5176ba38d57a473fd016daa9bb147140b5b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:24:23 compute-0 neutron-haproxy-ovnmeta-ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f[258522]: [NOTICE]   (258526) : New worker (258528) forked
Jan 26 17:24:23 compute-0 neutron-haproxy-ovnmeta-ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f[258522]: [NOTICE]   (258526) : Loading success.
Jan 26 17:24:23 compute-0 nova_compute[185389]: 2026-01-26 17:24:23.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:24:24 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:24:24.018 106955 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b3:ea:64 10.100.0.5'], port_security=['fa:16:3e:b3:ea:64 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '186e87cb-beb9-48df-8b10-dfc5c8afe996', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9b9ff6ad3012499db2eb0a82a1ccbcaa', 'neutron:revision_number': '6', 'neutron:security_group_ids': '34094d50-e876-4bbe-985c-d748419fede6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.201'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a0b14c64-3c3f-4e5b-a736-e555c8460dfa, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7faee18cfdf0>], logical_port=6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7faee18cfdf0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 17:24:24 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:24:24.020 106955 INFO neutron.agent.ovn.metadata.agent [-] Port 6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3 in datapath 4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac unbound from our chassis
Jan 26 17:24:24 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:24:24.022 106955 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 17:24:24 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:24:24.023 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[0f4ec7db-a8ad-44da-ace2-d5bbeef77ed3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:24:24 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:24:24.023 106955 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac namespace which is not needed anymore
Jan 26 17:24:24 compute-0 neutron-haproxy-ovnmeta-4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac[257489]: [NOTICE]   (257493) : haproxy version is 2.8.14-c23fe91
Jan 26 17:24:24 compute-0 neutron-haproxy-ovnmeta-4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac[257489]: [NOTICE]   (257493) : path to executable is /usr/sbin/haproxy
Jan 26 17:24:24 compute-0 neutron-haproxy-ovnmeta-4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac[257489]: [WARNING]  (257493) : Exiting Master process...
Jan 26 17:24:24 compute-0 neutron-haproxy-ovnmeta-4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac[257489]: [ALERT]    (257493) : Current worker (257495) exited with code 143 (Terminated)
Jan 26 17:24:24 compute-0 neutron-haproxy-ovnmeta-4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac[257489]: [WARNING]  (257493) : All workers exited. Exiting... (0)
Jan 26 17:24:24 compute-0 systemd[1]: libpod-baf82945f886178e9ae609aebb5810f45389bb01bcc69c99bb922028af955875.scope: Deactivated successfully.
Jan 26 17:24:24 compute-0 podman[258553]: 2026-01-26 17:24:24.247040782 +0000 UTC m=+0.081560045 container died baf82945f886178e9ae609aebb5810f45389bb01bcc69c99bb922028af955875 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 26 17:24:24 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-baf82945f886178e9ae609aebb5810f45389bb01bcc69c99bb922028af955875-userdata-shm.mount: Deactivated successfully.
Jan 26 17:24:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-514e8c7ba7a129be57292d6ade41d4de67ec50104c27738e43acebb517cb4b03-merged.mount: Deactivated successfully.
Jan 26 17:24:24 compute-0 podman[258553]: 2026-01-26 17:24:24.330507338 +0000 UTC m=+0.165026551 container cleanup baf82945f886178e9ae609aebb5810f45389bb01bcc69c99bb922028af955875 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 17:24:24 compute-0 systemd[1]: libpod-conmon-baf82945f886178e9ae609aebb5810f45389bb01bcc69c99bb922028af955875.scope: Deactivated successfully.
Jan 26 17:24:24 compute-0 podman[258581]: 2026-01-26 17:24:24.442605061 +0000 UTC m=+0.077873755 container remove baf82945f886178e9ae609aebb5810f45389bb01bcc69c99bb922028af955875 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Jan 26 17:24:24 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:24:24.453 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[858e219f-c38b-4337-9caa-f579f53f333d]: (4, ('Mon Jan 26 05:24:24 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac (baf82945f886178e9ae609aebb5810f45389bb01bcc69c99bb922028af955875)\nbaf82945f886178e9ae609aebb5810f45389bb01bcc69c99bb922028af955875\nMon Jan 26 05:24:24 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac (baf82945f886178e9ae609aebb5810f45389bb01bcc69c99bb922028af955875)\nbaf82945f886178e9ae609aebb5810f45389bb01bcc69c99bb922028af955875\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:24:24 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:24:24.455 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[2efb16cb-7053-47dd-83a0-95c7312d0c03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:24:24 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:24:24.456 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4a7c91d4-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:24:24 compute-0 nova_compute[185389]: 2026-01-26 17:24:24.459 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:24:24 compute-0 kernel: tap4a7c91d4-b0: left promiscuous mode
Jan 26 17:24:24 compute-0 nova_compute[185389]: 2026-01-26 17:24:24.480 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:24:24 compute-0 nova_compute[185389]: 2026-01-26 17:24:24.482 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:24:24 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:24:24.484 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[87a6fa34-8b49-4310-99f3-e6b5636c1bb8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:24:24 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:24:24.499 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[66bbe0d7-a4f7-4323-8237-5850c1ee5e9c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:24:24 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:24:24.501 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[e7c70272-b833-42e1-b6dd-1d488803fe64]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:24:24 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:24:24.518 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[67652e89-5f90-4808-9650-c1968ecd353b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 684169, 'reachable_time': 40836, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258599, 'error': None, 'target': 'ovnmeta-4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:24:24 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:24:24.521 107449 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 17:24:24 compute-0 systemd[1]: run-netns-ovnmeta\x2d4a7c91d4\x2db0d3\x2d4f29\x2dad26\x2de78aa433d3ac.mount: Deactivated successfully.
Jan 26 17:24:24 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:24:24.521 107449 DEBUG oslo.privsep.daemon [-] privsep: reply[c89f620f-f8ca-4f80-aa1a-59c02b0ed0cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:24:24 compute-0 nova_compute[185389]: 2026-01-26 17:24:24.888 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:24:25 compute-0 nova_compute[185389]: 2026-01-26 17:24:25.192 185393 DEBUG nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 17:24:25 compute-0 nova_compute[185389]: 2026-01-26 17:24:25.194 185393 DEBUG nova.virt.libvirt.vif [None req-128bdcf3-9d05-42be-ba59-b999064ad808 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T17:21:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-34810632',display_name='tempest-ServerActionsTestJSON-server-34810632',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-34810632',id=7,image_ref='90acf026-cf3a-409a-999e-35d89bb9a6bf',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHTXJeN/GiNVdk5tCK494xdfwd2oGU0rMaOXTgIR00PDsryTQP8qZXOiVkgunB3Q/QnB+t1PHKegnTlGoORFTNpKcXfSp02clner5iC0LHdkku2AHdsO52WWVjg3zvN4Sw==',key_name='tempest-keypair-288037080',keypairs=<?>,launch_index=0,launched_at=2026-01-26T17:21:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9b9ff6ad3012499db2eb0a82a1ccbcaa',ramdisk_id='',reservation_id='r-evpcozau',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='90acf026-cf3a-409a-999e-35d89bb9a6bf',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-254851137',owner_user_name='tempest-ServerActionsTestJSON-254851137-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T17:23:13Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='6acd3be55c754b3dbf8ef6c0922b18ae',uuid=186e87cb-beb9-48df-8b10-dfc5c8afe996,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3", "address": "fa:16:3e:b3:ea:64", "network": {"id": "4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1598418847-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b9ff6ad3012499db2eb0a82a1ccbcaa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e11a3e1-dc", "ovs_interfaceid": "6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 17:24:25 compute-0 nova_compute[185389]: 2026-01-26 17:24:25.194 185393 DEBUG nova.network.os_vif_util [None req-128bdcf3-9d05-42be-ba59-b999064ad808 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Converting VIF {"id": "6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3", "address": "fa:16:3e:b3:ea:64", "network": {"id": "4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1598418847-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b9ff6ad3012499db2eb0a82a1ccbcaa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e11a3e1-dc", "ovs_interfaceid": "6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 17:24:25 compute-0 nova_compute[185389]: 2026-01-26 17:24:25.195 185393 DEBUG nova.network.os_vif_util [None req-128bdcf3-9d05-42be-ba59-b999064ad808 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b3:ea:64,bridge_name='br-int',has_traffic_filtering=True,id=6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3,network=Network(4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e11a3e1-dc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 17:24:25 compute-0 nova_compute[185389]: 2026-01-26 17:24:25.196 185393 DEBUG os_vif [None req-128bdcf3-9d05-42be-ba59-b999064ad808 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b3:ea:64,bridge_name='br-int',has_traffic_filtering=True,id=6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3,network=Network(4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e11a3e1-dc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 17:24:25 compute-0 nova_compute[185389]: 2026-01-26 17:24:25.200 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:24:25 compute-0 nova_compute[185389]: 2026-01-26 17:24:25.201 185393 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6e11a3e1-dc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:24:25 compute-0 nova_compute[185389]: 2026-01-26 17:24:25.203 185393 DEBUG oslo_concurrency.lockutils [req-c99b8c9b-c30c-4780-a111-cb0a25d3918f req-101664a1-e373-4568-923e-bc1b5df7d0ed 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Releasing lock "refresh_cache-f9b0315f-2a3c-471e-b629-b19d90a40a97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 17:24:25 compute-0 nova_compute[185389]: 2026-01-26 17:24:25.204 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:24:25 compute-0 nova_compute[185389]: 2026-01-26 17:24:25.207 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 17:24:25 compute-0 nova_compute[185389]: 2026-01-26 17:24:25.209 185393 DEBUG nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 17:24:25 compute-0 nova_compute[185389]: 2026-01-26 17:24:25.212 185393 INFO os_vif [None req-128bdcf3-9d05-42be-ba59-b999064ad808 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b3:ea:64,bridge_name='br-int',has_traffic_filtering=True,id=6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3,network=Network(4a7c91d4-b0d3-4f29-ad26-e78aa433d3ac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e11a3e1-dc')
Jan 26 17:24:25 compute-0 nova_compute[185389]: 2026-01-26 17:24:25.213 185393 INFO nova.virt.libvirt.driver [None req-128bdcf3-9d05-42be-ba59-b999064ad808 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Deleting instance files /var/lib/nova/instances/186e87cb-beb9-48df-8b10-dfc5c8afe996_del
Jan 26 17:24:25 compute-0 nova_compute[185389]: 2026-01-26 17:24:25.214 185393 INFO nova.virt.libvirt.driver [None req-128bdcf3-9d05-42be-ba59-b999064ad808 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Deletion of /var/lib/nova/instances/186e87cb-beb9-48df-8b10-dfc5c8afe996_del complete
Jan 26 17:24:25 compute-0 nova_compute[185389]: 2026-01-26 17:24:25.425 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:24:25 compute-0 nova_compute[185389]: 2026-01-26 17:24:25.425 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:24:25 compute-0 nova_compute[185389]: 2026-01-26 17:24:25.426 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:24:25 compute-0 nova_compute[185389]: 2026-01-26 17:24:25.426 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 17:24:25 compute-0 nova_compute[185389]: 2026-01-26 17:24:25.614 185393 INFO nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 17:24:25 compute-0 nova_compute[185389]: 2026-01-26 17:24:25.778 185393 INFO nova.compute.manager [None req-128bdcf3-9d05-42be-ba59-b999064ad808 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Took 3.62 seconds to destroy the instance on the hypervisor.
Jan 26 17:24:25 compute-0 nova_compute[185389]: 2026-01-26 17:24:25.779 185393 DEBUG oslo.service.loopingcall [None req-128bdcf3-9d05-42be-ba59-b999064ad808 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 17:24:25 compute-0 nova_compute[185389]: 2026-01-26 17:24:25.779 185393 DEBUG nova.compute.manager [-] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 17:24:25 compute-0 nova_compute[185389]: 2026-01-26 17:24:25.780 185393 DEBUG nova.network.neutron [-] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 17:24:25 compute-0 nova_compute[185389]: 2026-01-26 17:24:25.807 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cf6218c0-bc2c-4097-91df-f60657ef7ab1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:24:25 compute-0 nova_compute[185389]: 2026-01-26 17:24:25.879 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cf6218c0-bc2c-4097-91df-f60657ef7ab1/disk --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:24:25 compute-0 nova_compute[185389]: 2026-01-26 17:24:25.881 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cf6218c0-bc2c-4097-91df-f60657ef7ab1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:24:25 compute-0 nova_compute[185389]: 2026-01-26 17:24:25.933 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:24:25 compute-0 nova_compute[185389]: 2026-01-26 17:24:25.950 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cf6218c0-bc2c-4097-91df-f60657ef7ab1/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:24:25 compute-0 nova_compute[185389]: 2026-01-26 17:24:25.955 185393 WARNING nova.virt.libvirt.driver [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Error from libvirt while getting description of instance-00000007: [Error Code 42] Domain not found: no domain with matching uuid '186e87cb-beb9-48df-8b10-dfc5c8afe996' (instance-00000007): libvirt.libvirtError: Domain not found: no domain with matching uuid '186e87cb-beb9-48df-8b10-dfc5c8afe996' (instance-00000007)
Jan 26 17:24:25 compute-0 nova_compute[185389]: 2026-01-26 17:24:25.961 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f9b0315f-2a3c-471e-b629-b19d90a40a97/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:24:26 compute-0 nova_compute[185389]: 2026-01-26 17:24:26.041 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f9b0315f-2a3c-471e-b629-b19d90a40a97/disk --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:24:26 compute-0 nova_compute[185389]: 2026-01-26 17:24:26.043 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f9b0315f-2a3c-471e-b629-b19d90a40a97/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:24:26 compute-0 nova_compute[185389]: 2026-01-26 17:24:26.112 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f9b0315f-2a3c-471e-b629-b19d90a40a97/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:24:26 compute-0 nova_compute[185389]: 2026-01-26 17:24:26.561 185393 WARNING nova.virt.libvirt.driver [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:24:26 compute-0 nova_compute[185389]: 2026-01-26 17:24:26.564 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5071MB free_disk=72.31455612182617GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 17:24:26 compute-0 nova_compute[185389]: 2026-01-26 17:24:26.564 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:24:26 compute-0 nova_compute[185389]: 2026-01-26 17:24:26.565 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:24:28 compute-0 nova_compute[185389]: 2026-01-26 17:24:28.326 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance 186e87cb-beb9-48df-8b10-dfc5c8afe996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 17:24:28 compute-0 nova_compute[185389]: 2026-01-26 17:24:28.326 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance cf6218c0-bc2c-4097-91df-f60657ef7ab1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 17:24:28 compute-0 nova_compute[185389]: 2026-01-26 17:24:28.326 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance f9b0315f-2a3c-471e-b629-b19d90a40a97 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 17:24:28 compute-0 nova_compute[185389]: 2026-01-26 17:24:28.327 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 17:24:28 compute-0 nova_compute[185389]: 2026-01-26 17:24:28.328 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=79GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 17:24:28 compute-0 nova_compute[185389]: 2026-01-26 17:24:28.461 185393 DEBUG nova.compute.provider_tree [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:24:28 compute-0 nova_compute[185389]: 2026-01-26 17:24:28.749 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:24:29 compute-0 podman[201244]: time="2026-01-26T17:24:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 17:24:29 compute-0 podman[201244]: @ - - [26/Jan/2026:17:24:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29741 "" "Go-http-client/1.1"
Jan 26 17:24:30 compute-0 podman[201244]: @ - - [26/Jan/2026:17:24:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4846 "" "Go-http-client/1.1"
Jan 26 17:24:30 compute-0 nova_compute[185389]: 2026-01-26 17:24:30.206 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:24:30 compute-0 nova_compute[185389]: 2026-01-26 17:24:30.225 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 17:24:30 compute-0 nova_compute[185389]: 2026-01-26 17:24:30.226 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.661s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:24:30 compute-0 nova_compute[185389]: 2026-01-26 17:24:30.501 185393 DEBUG nova.network.neutron [-] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 17:24:30 compute-0 nova_compute[185389]: 2026-01-26 17:24:30.533 185393 INFO nova.compute.manager [-] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Took 4.75 seconds to deallocate network for instance.
Jan 26 17:24:30 compute-0 nova_compute[185389]: 2026-01-26 17:24:30.623 185393 DEBUG oslo_concurrency.lockutils [None req-128bdcf3-9d05-42be-ba59-b999064ad808 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:24:30 compute-0 nova_compute[185389]: 2026-01-26 17:24:30.623 185393 DEBUG oslo_concurrency.lockutils [None req-128bdcf3-9d05-42be-ba59-b999064ad808 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:24:30 compute-0 nova_compute[185389]: 2026-01-26 17:24:30.723 185393 DEBUG nova.compute.provider_tree [None req-128bdcf3-9d05-42be-ba59-b999064ad808 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:24:30 compute-0 nova_compute[185389]: 2026-01-26 17:24:30.777 185393 DEBUG nova.scheduler.client.report [None req-128bdcf3-9d05-42be-ba59-b999064ad808 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:24:30 compute-0 nova_compute[185389]: 2026-01-26 17:24:30.797 185393 DEBUG oslo_concurrency.lockutils [None req-128bdcf3-9d05-42be-ba59-b999064ad808 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.174s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:24:30 compute-0 nova_compute[185389]: 2026-01-26 17:24:30.843 185393 INFO nova.scheduler.client.report [None req-128bdcf3-9d05-42be-ba59-b999064ad808 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Deleted allocations for instance 186e87cb-beb9-48df-8b10-dfc5c8afe996
Jan 26 17:24:30 compute-0 nova_compute[185389]: 2026-01-26 17:24:30.935 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:24:31 compute-0 nova_compute[185389]: 2026-01-26 17:24:31.027 185393 DEBUG oslo_concurrency.lockutils [None req-128bdcf3-9d05-42be-ba59-b999064ad808 6acd3be55c754b3dbf8ef6c0922b18ae 9b9ff6ad3012499db2eb0a82a1ccbcaa - - default default] Lock "186e87cb-beb9-48df-8b10-dfc5c8afe996" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.874s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:24:31 compute-0 nova_compute[185389]: 2026-01-26 17:24:31.031 185393 DEBUG nova.compute.manager [req-f85ebfdd-4ab7-45e7-978b-015d6d7593fc req-a53190c3-f51e-4d80-ae02-4eae8fc96ccd 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Received event network-vif-deleted-6e11a3e1-dccd-4fbd-92f1-cd2cd51302e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 17:24:31 compute-0 nova_compute[185389]: 2026-01-26 17:24:31.317 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:24:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:31.356 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 26 17:24:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:31.359 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 26 17:24:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:31.360 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:24:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:31.361 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f04ce8af020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:24:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:31.362 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:24:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:31.363 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:24:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:31.364 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:24:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:31.365 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:24:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:31.366 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:24:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:31.366 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:24:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:31.367 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:24:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:31.368 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:24:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:31.368 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance cf6218c0-bc2c-4097-91df-f60657ef7ab1 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 26 17:24:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:31.369 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:24:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:31.370 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/cf6218c0-bc2c-4097-91df-f60657ef7ab1 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}f609241ecdf9402bd0546eda97196742cf90b225f1ce4eb867c55aad4d129116" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 26 17:24:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:31.371 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:24:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:31.373 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:24:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:31.374 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:24:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:31.375 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8adb20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:24:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:31.376 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:24:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:31.376 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:24:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:31.377 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:24:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:31.378 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:24:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:31.378 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:24:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:31.378 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:24:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:31.379 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:24:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:31.379 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:24:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:31.379 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:24:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:31.379 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:24:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:31.379 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:24:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:31.379 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:24:31 compute-0 openstack_network_exporter[204387]: ERROR   17:24:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 17:24:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:24:31 compute-0 openstack_network_exporter[204387]: ERROR   17:24:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 17:24:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:24:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:32.645 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1854 Content-Type: application/json Date: Mon, 26 Jan 2026 17:24:31 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-642532d7-fc7c-4317-8cb6-349a94a630ba x-openstack-request-id: req-642532d7-fc7c-4317-8cb6-349a94a630ba _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 26 17:24:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:32.645 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "cf6218c0-bc2c-4097-91df-f60657ef7ab1", "name": "tempest-TestNetworkBasicOps-server-979678882", "status": "ACTIVE", "tenant_id": "72e07b00ccf54deaa85258e2c3332b45", "user_id": "a04a28d3bd7648abb04b59df0aeee0aa", "metadata": {}, "hostId": "02493b33938631ad8f061d4e969bb02fe0fa39297bdf231bc8414ffc", "image": {"id": "90acf026-cf3a-409a-999e-35d89bb9a6bf", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/90acf026-cf3a-409a-999e-35d89bb9a6bf"}]}, "flavor": {"id": "8d013773-e8ea-4b83-a8e3-f58d9749637f", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/8d013773-e8ea-4b83-a8e3-f58d9749637f"}]}, "created": "2026-01-26T17:23:15Z", "updated": "2026-01-26T17:23:29Z", "addresses": {"tempest-network-smoke--2054182957": [{"version": 4, "addr": "10.100.0.13", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:d9:71:2d"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/cf6218c0-bc2c-4097-91df-f60657ef7ab1"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/cf6218c0-bc2c-4097-91df-f60657ef7ab1"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-TestNetworkBasicOps-1082130080", "OS-SRV-USG:launched_at": "2026-01-26T17:23:29.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-secgroup-smoke-2094967954"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000b", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} 
_http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 26 17:24:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:32.645 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/cf6218c0-bc2c-4097-91df-f60657ef7ab1 used request id req-642532d7-fc7c-4317-8cb6-349a94a630ba request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 26 17:24:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:32.646 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'cf6218c0-bc2c-4097-91df-f60657ef7ab1', 'name': 'tempest-TestNetworkBasicOps-server-979678882', 'flavor': {'id': '8d013773-e8ea-4b83-a8e3-f58d9749637f', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '90acf026-cf3a-409a-999e-35d89bb9a6bf'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000b', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '72e07b00ccf54deaa85258e2c3332b45', 'user_id': 'a04a28d3bd7648abb04b59df0aeee0aa', 'hostId': '02493b33938631ad8f061d4e969bb02fe0fa39297bdf231bc8414ffc', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 26 17:24:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:32.649 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance f9b0315f-2a3c-471e-b629-b19d90a40a97 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 26 17:24:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:32.650 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/f9b0315f-2a3c-471e-b629-b19d90a40a97 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}f609241ecdf9402bd0546eda97196742cf90b225f1ce4eb867c55aad4d129116" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 26 17:24:33 compute-0 nova_compute[185389]: 2026-01-26 17:24:33.631 185393 DEBUG nova.compute.manager [req-229d50fc-422b-49c0-bbaf-29e9e91f81d6 req-6d1ea261-71cc-4a6e-993c-52770ec4dfbb 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] Received event network-vif-plugged-4ea974be-d995-4c0f-bbcd-7a1410b167d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 17:24:33 compute-0 nova_compute[185389]: 2026-01-26 17:24:33.632 185393 DEBUG oslo_concurrency.lockutils [req-229d50fc-422b-49c0-bbaf-29e9e91f81d6 req-6d1ea261-71cc-4a6e-993c-52770ec4dfbb 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquiring lock "f9b0315f-2a3c-471e-b629-b19d90a40a97-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:24:33 compute-0 nova_compute[185389]: 2026-01-26 17:24:33.632 185393 DEBUG oslo_concurrency.lockutils [req-229d50fc-422b-49c0-bbaf-29e9e91f81d6 req-6d1ea261-71cc-4a6e-993c-52770ec4dfbb 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "f9b0315f-2a3c-471e-b629-b19d90a40a97-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:24:33 compute-0 nova_compute[185389]: 2026-01-26 17:24:33.632 185393 DEBUG oslo_concurrency.lockutils [req-229d50fc-422b-49c0-bbaf-29e9e91f81d6 req-6d1ea261-71cc-4a6e-993c-52770ec4dfbb 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "f9b0315f-2a3c-471e-b629-b19d90a40a97-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:24:33 compute-0 nova_compute[185389]: 2026-01-26 17:24:33.632 185393 DEBUG nova.compute.manager [req-229d50fc-422b-49c0-bbaf-29e9e91f81d6 req-6d1ea261-71cc-4a6e-993c-52770ec4dfbb 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] Processing event network-vif-plugged-4ea974be-d995-4c0f-bbcd-7a1410b167d8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 17:24:33 compute-0 nova_compute[185389]: 2026-01-26 17:24:33.633 185393 DEBUG nova.compute.manager [req-229d50fc-422b-49c0-bbaf-29e9e91f81d6 req-6d1ea261-71cc-4a6e-993c-52770ec4dfbb 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] Received event network-vif-plugged-4ea974be-d995-4c0f-bbcd-7a1410b167d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 17:24:33 compute-0 nova_compute[185389]: 2026-01-26 17:24:33.633 185393 DEBUG oslo_concurrency.lockutils [req-229d50fc-422b-49c0-bbaf-29e9e91f81d6 req-6d1ea261-71cc-4a6e-993c-52770ec4dfbb 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquiring lock "f9b0315f-2a3c-471e-b629-b19d90a40a97-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:24:33 compute-0 nova_compute[185389]: 2026-01-26 17:24:33.633 185393 DEBUG oslo_concurrency.lockutils [req-229d50fc-422b-49c0-bbaf-29e9e91f81d6 req-6d1ea261-71cc-4a6e-993c-52770ec4dfbb 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "f9b0315f-2a3c-471e-b629-b19d90a40a97-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:24:33 compute-0 nova_compute[185389]: 2026-01-26 17:24:33.633 185393 DEBUG oslo_concurrency.lockutils [req-229d50fc-422b-49c0-bbaf-29e9e91f81d6 req-6d1ea261-71cc-4a6e-993c-52770ec4dfbb 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "f9b0315f-2a3c-471e-b629-b19d90a40a97-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:24:33 compute-0 nova_compute[185389]: 2026-01-26 17:24:33.633 185393 DEBUG nova.compute.manager [req-229d50fc-422b-49c0-bbaf-29e9e91f81d6 req-6d1ea261-71cc-4a6e-993c-52770ec4dfbb 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] No waiting events found dispatching network-vif-plugged-4ea974be-d995-4c0f-bbcd-7a1410b167d8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 17:24:33 compute-0 nova_compute[185389]: 2026-01-26 17:24:33.633 185393 WARNING nova.compute.manager [req-229d50fc-422b-49c0-bbaf-29e9e91f81d6 req-6d1ea261-71cc-4a6e-993c-52770ec4dfbb 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] Received unexpected event network-vif-plugged-4ea974be-d995-4c0f-bbcd-7a1410b167d8 for instance with vm_state building and task_state spawning.
Jan 26 17:24:33 compute-0 nova_compute[185389]: 2026-01-26 17:24:33.634 185393 DEBUG nova.compute.manager [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] Instance event wait completed in 11 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 17:24:33 compute-0 nova_compute[185389]: 2026-01-26 17:24:33.639 185393 DEBUG nova.virt.driver [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Emitting event <LifecycleEvent: 1769448273.6388962, f9b0315f-2a3c-471e-b629-b19d90a40a97 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 17:24:33 compute-0 nova_compute[185389]: 2026-01-26 17:24:33.640 185393 INFO nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] VM Resumed (Lifecycle Event)
Jan 26 17:24:33 compute-0 nova_compute[185389]: 2026-01-26 17:24:33.642 185393 DEBUG nova.virt.libvirt.driver [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 17:24:33 compute-0 nova_compute[185389]: 2026-01-26 17:24:33.648 185393 INFO nova.virt.libvirt.driver [-] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] Instance spawned successfully.
Jan 26 17:24:33 compute-0 nova_compute[185389]: 2026-01-26 17:24:33.648 185393 DEBUG nova.virt.libvirt.driver [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 17:24:33 compute-0 nova_compute[185389]: 2026-01-26 17:24:33.662 185393 DEBUG nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 17:24:33 compute-0 nova_compute[185389]: 2026-01-26 17:24:33.676 185393 DEBUG nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 17:24:33 compute-0 nova_compute[185389]: 2026-01-26 17:24:33.681 185393 DEBUG nova.virt.libvirt.driver [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 17:24:33 compute-0 nova_compute[185389]: 2026-01-26 17:24:33.682 185393 DEBUG nova.virt.libvirt.driver [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 17:24:33 compute-0 nova_compute[185389]: 2026-01-26 17:24:33.682 185393 DEBUG nova.virt.libvirt.driver [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 17:24:33 compute-0 nova_compute[185389]: 2026-01-26 17:24:33.683 185393 DEBUG nova.virt.libvirt.driver [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 17:24:33 compute-0 nova_compute[185389]: 2026-01-26 17:24:33.683 185393 DEBUG nova.virt.libvirt.driver [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 17:24:33 compute-0 nova_compute[185389]: 2026-01-26 17:24:33.683 185393 DEBUG nova.virt.libvirt.driver [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 17:24:33 compute-0 nova_compute[185389]: 2026-01-26 17:24:33.708 185393 INFO nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 17:24:33 compute-0 nova_compute[185389]: 2026-01-26 17:24:33.753 185393 INFO nova.compute.manager [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] Took 23.76 seconds to spawn the instance on the hypervisor.
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.753 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1691 Content-Type: application/json Date: Mon, 26 Jan 2026 17:24:32 GMT Keep-Alive: timeout=5, max=99 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-f7dfcbb2-d07a-42da-aa8e-873ba342cea4 x-openstack-request-id: req-f7dfcbb2-d07a-42da-aa8e-873ba342cea4 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.754 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "f9b0315f-2a3c-471e-b629-b19d90a40a97", "name": "te-9772802-asg-iiflfa6ewgov-qnk6sbr7rvkm-ziuhg6hort2c", "status": "BUILD", "tenant_id": "237a863555d84bd386855d9cf781beb4", "user_id": "5ca35c18e54b493f9efdfe2218cce3c7", "metadata": {"metering.server_group": "21873820-28a9-4731-9256-efbf2eb46b4d"}, "hostId": "d53ff20533f73aa1094f7d1b315e252b91e3e85487374d883e31cb42", "image": {"id": "a3153c85-d830-4fd6-8cd6-1a69e6723a9e", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/a3153c85-d830-4fd6-8cd6-1a69e6723a9e"}]}, "flavor": {"id": "8d013773-e8ea-4b83-a8e3-f58d9749637f", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/8d013773-e8ea-4b83-a8e3-f58d9749637f"}]}, "created": "2026-01-26T17:24:07Z", "updated": "2026-01-26T17:24:09Z", "addresses": {}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/f9b0315f-2a3c-471e-b629-b19d90a40a97"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/f9b0315f-2a3c-471e-b629-b19d90a40a97"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "", "key_name": null, "OS-SRV-USG:launched_at": null, "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "default"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000d", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": "spawning", "OS-EXT-STS:vm_state": "building", "OS-EXT-STS:power_state": 0, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.754 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/f9b0315f-2a3c-471e-b629-b19d90a40a97 used request id req-f7dfcbb2-d07a-42da-aa8e-873ba342cea4 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 26 17:24:33 compute-0 nova_compute[185389]: 2026-01-26 17:24:33.754 185393 DEBUG nova.compute.manager [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.755 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'f9b0315f-2a3c-471e-b629-b19d90a40a97', 'name': 'te-9772802-asg-iiflfa6ewgov-qnk6sbr7rvkm-ziuhg6hort2c', 'flavor': {'id': '8d013773-e8ea-4b83-a8e3-f58d9749637f', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'a3153c85-d830-4fd6-8cd6-1a69e6723a9e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000d', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '237a863555d84bd386855d9cf781beb4', 'user_id': '5ca35c18e54b493f9efdfe2218cce3c7', 'hostId': 'd53ff20533f73aa1094f7d1b315e252b91e3e85487374d883e31cb42', 'status': 'active', 'metadata': {'metering.server_group': '21873820-28a9-4731-9256-efbf2eb46b4d'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.755 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.756 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.756 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.756 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.757 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-26T17:24:33.756429) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.806 14 DEBUG ceilometer.compute.pollsters [-] cf6218c0-bc2c-4097-91df-f60657ef7ab1/disk.device.write.bytes volume: 72957952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.806 14 DEBUG ceilometer.compute.pollsters [-] cf6218c0-bc2c-4097-91df-f60657ef7ab1/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.849 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.850 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.851 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.851 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f04ce8af080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.852 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.852 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.852 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.852 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.852 14 DEBUG ceilometer.compute.pollsters [-] cf6218c0-bc2c-4097-91df-f60657ef7ab1/disk.device.write.latency volume: 3508090658 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.852 14 DEBUG ceilometer.compute.pollsters [-] cf6218c0-bc2c-4097-91df-f60657ef7ab1/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.853 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.853 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.853 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-26T17:24:33.852477) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.854 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.854 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f04ce8af0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.854 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.854 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.854 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.854 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.855 14 DEBUG ceilometer.compute.pollsters [-] cf6218c0-bc2c-4097-91df-f60657ef7ab1/disk.device.write.requests volume: 315 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.855 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-26T17:24:33.854854) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.855 14 DEBUG ceilometer.compute.pollsters [-] cf6218c0-bc2c-4097-91df-f60657ef7ab1/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.856 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.856 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.856 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.856 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f04ce8af470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.857 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.857 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.857 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.857 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.857 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-26T17:24:33.857249) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.862 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for cf6218c0-bc2c-4097-91df-f60657ef7ab1 / tap994f4b51-01 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.863 14 DEBUG ceilometer.compute.pollsters [-] cf6218c0-bc2c-4097-91df-f60657ef7ab1/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.869 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for f9b0315f-2a3c-471e-b629-b19d90a40a97 / tap4ea974be-d9 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.869 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.870 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.870 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f04ce8ac260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.870 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.870 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.871 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.871 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.871 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.871 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2026-01-26T17:24:33.871186) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.871 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: tempest-TestNetworkBasicOps-server-979678882>, <NovaLikeServer: te-9772802-asg-iiflfa6ewgov-qnk6sbr7rvkm-ziuhg6hort2c>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-TestNetworkBasicOps-server-979678882>, <NovaLikeServer: te-9772802-asg-iiflfa6ewgov-qnk6sbr7rvkm-ziuhg6hort2c>]
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.872 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f04cfa81c70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.872 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.872 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.872 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.872 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.873 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-26T17:24:33.872643) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.898 14 DEBUG ceilometer.compute.pollsters [-] cf6218c0-bc2c-4097-91df-f60657ef7ab1/cpu volume: 35260000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.936 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/cpu volume: 250000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.937 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.937 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f04ce8ac170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.937 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.938 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.938 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.938 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.938 14 DEBUG ceilometer.compute.pollsters [-] cf6218c0-bc2c-4097-91df-f60657ef7ab1/network.incoming.packets volume: 111 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.938 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.939 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.939 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-26T17:24:33.938238) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.939 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f04ce8af140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.939 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.939 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.939 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.940 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.940 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-26T17:24:33.940002) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.940 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.941 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f04ce8adb50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.941 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.941 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.941 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.941 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.941 14 DEBUG ceilometer.compute.pollsters [-] cf6218c0-bc2c-4097-91df-f60657ef7ab1/network.incoming.packets.drop volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.942 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.942 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.943 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f04ce8ad9a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.943 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-26T17:24:33.941736) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.943 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.943 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.943 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.943 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.944 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-26T17:24:33.943678) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.944 14 DEBUG ceilometer.compute.pollsters [-] cf6218c0-bc2c-4097-91df-f60657ef7ab1/network.outgoing.bytes volume: 15770 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.944 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.944 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.945 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f04ce8af1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.945 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.945 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.945 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.945 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.946 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-26T17:24:33.945649) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.946 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.946 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f04ce8ad8e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.946 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.946 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.947 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.947 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.947 14 DEBUG ceilometer.compute.pollsters [-] cf6218c0-bc2c-4097-91df-f60657ef7ab1/network.outgoing.packets volume: 105 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.947 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.948 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-26T17:24:33.947088) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.948 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.948 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f04ce8ada60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.948 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.948 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.948 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.948 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.949 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-26T17:24:33.948728) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.949 14 DEBUG ceilometer.compute.pollsters [-] cf6218c0-bc2c-4097-91df-f60657ef7ab1/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.949 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.949 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.949 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f04ce8adaf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.950 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.950 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8adb20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.950 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8adb20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.950 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.950 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.950 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2026-01-26T17:24:33.950411) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.950 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: tempest-TestNetworkBasicOps-server-979678882>, <NovaLikeServer: te-9772802-asg-iiflfa6ewgov-qnk6sbr7rvkm-ziuhg6hort2c>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-TestNetworkBasicOps-server-979678882>, <NovaLikeServer: te-9772802-asg-iiflfa6ewgov-qnk6sbr7rvkm-ziuhg6hort2c>]
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.951 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f04ce8ad880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.951 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.951 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.951 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.951 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.952 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-26T17:24:33.951706) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.952 14 DEBUG ceilometer.compute.pollsters [-] cf6218c0-bc2c-4097-91df-f60657ef7ab1/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.952 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.952 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.953 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f04ce8af3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.953 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.953 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.953 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.953 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.954 14 DEBUG ceilometer.compute.pollsters [-] cf6218c0-bc2c-4097-91df-f60657ef7ab1/memory.usage volume: 42.58203125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.954 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-26T17:24:33.953705) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.954 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.954 14 WARNING ceilometer.compute.pollsters [-] memory.usage statistic in not available for instance f9b0315f-2a3c-471e-b629-b19d90a40a97: ceilometer.compute.pollsters.NoVolumeException
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.954 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.955 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f04ce8af6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.955 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.955 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.955 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.955 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.956 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-26T17:24:33.955652) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.955 14 DEBUG ceilometer.compute.pollsters [-] cf6218c0-bc2c-4097-91df-f60657ef7ab1/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.956 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.957 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.957 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f04ce8af410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.957 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.957 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.957 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.957 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.958 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-26T17:24:33.957555) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.958 14 DEBUG ceilometer.compute.pollsters [-] cf6218c0-bc2c-4097-91df-f60657ef7ab1/network.incoming.bytes volume: 19850 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.958 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.959 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.959 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f04ce8ac470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.959 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.959 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.959 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.959 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.959 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-26T17:24:33.959544) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.960 14 DEBUG ceilometer.compute.pollsters [-] cf6218c0-bc2c-4097-91df-f60657ef7ab1/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.960 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.961 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.961 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f04d17142f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.961 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.961 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.961 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.961 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.966 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-26T17:24:33.961885) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:24:33 compute-0 nova_compute[185389]: 2026-01-26 17:24:33.970 185393 INFO nova.compute.manager [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] Took 24.87 seconds to build instance.
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.978 14 DEBUG ceilometer.compute.pollsters [-] cf6218c0-bc2c-4097-91df-f60657ef7ab1/disk.device.allocation volume: 30679040 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.979 14 DEBUG ceilometer.compute.pollsters [-] cf6218c0-bc2c-4097-91df-f60657ef7ab1/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:24:33 compute-0 nova_compute[185389]: 2026-01-26 17:24:33.990 185393 DEBUG oslo_concurrency.lockutils [None req-eda9566d-ec1b-40c5-ae81-b0f35bc1cc80 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Lock "f9b0315f-2a3c-471e-b629-b19d90a40a97" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 25.305s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.997 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.997 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.998 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.998 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f04ce8afe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.998 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.999 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.999 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.999 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:24:33 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:33.999 14 DEBUG ceilometer.compute.pollsters [-] cf6218c0-bc2c-4097-91df-f60657ef7ab1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:24:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:34.000 14 DEBUG ceilometer.compute.pollsters [-] cf6218c0-bc2c-4097-91df-f60657ef7ab1/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:24:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:34.000 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:24:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:34.000 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:24:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:34.001 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 26 17:24:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:34.001 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f04ce8aede0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:24:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:34.001 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-26T17:24:33.999314) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:24:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:34.002 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 26 17:24:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:34.002 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:24:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:34.002 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:24:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:34.002 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:24:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:34.002 14 DEBUG ceilometer.compute.pollsters [-] cf6218c0-bc2c-4097-91df-f60657ef7ab1/disk.device.read.bytes volume: 30366208 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:24:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:34.002 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-26T17:24:34.002464) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:24:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:34.003 14 DEBUG ceilometer.compute.pollsters [-] cf6218c0-bc2c-4097-91df-f60657ef7ab1/disk.device.read.bytes volume: 274750 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:24:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:34.003 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.read.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:24:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:34.003 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.read.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:24:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:34.004 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 26 17:24:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:34.004 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f04ce8aef00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:24:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:34.004 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 26 17:24:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:34.004 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:24:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:34.004 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:24:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:34.005 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:24:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:34.005 14 DEBUG ceilometer.compute.pollsters [-] cf6218c0-bc2c-4097-91df-f60657ef7ab1/disk.device.read.latency volume: 516390232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:24:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:34.005 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-26T17:24:34.005054) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:24:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:34.005 14 DEBUG ceilometer.compute.pollsters [-] cf6218c0-bc2c-4097-91df-f60657ef7ab1/disk.device.read.latency volume: 61748607 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:24:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:34.006 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.read.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:24:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:34.006 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.read.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:24:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:34.007 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 26 17:24:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:34.007 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f04ce8ac710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:24:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:34.007 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 26 17:24:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:34.007 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:24:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:34.007 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:24:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:34.008 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:24:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:34.008 14 DEBUG ceilometer.compute.pollsters [-] cf6218c0-bc2c-4097-91df-f60657ef7ab1/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:24:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:34.008 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:24:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:34.009 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 26 17:24:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:34.009 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f04ce8aef60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:24:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:34.009 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 26 17:24:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:34.009 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:24:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:34.010 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:24:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:34.010 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-26T17:24:34.007934) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:24:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:34.010 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:24:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:34.010 14 DEBUG ceilometer.compute.pollsters [-] cf6218c0-bc2c-4097-91df-f60657ef7ab1/disk.device.read.requests volume: 1093 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:24:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:34.011 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-26T17:24:34.010504) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:24:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:34.011 14 DEBUG ceilometer.compute.pollsters [-] cf6218c0-bc2c-4097-91df-f60657ef7ab1/disk.device.read.requests volume: 108 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:24:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:34.011 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.read.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:24:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:34.012 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.read.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:24:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:34.012 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 26 17:24:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:34.012 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f04ce8aefc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:24:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:34.012 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 26 17:24:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:34.013 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:24:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:34.013 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:24:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:34.013 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:24:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:34.013 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-26T17:24:34.013373) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:24:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:34.013 14 DEBUG ceilometer.compute.pollsters [-] cf6218c0-bc2c-4097-91df-f60657ef7ab1/disk.device.usage volume: 29949952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:24:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:34.014 14 DEBUG ceilometer.compute.pollsters [-] cf6218c0-bc2c-4097-91df-f60657ef7ab1/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:24:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:34.014 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:24:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:34.014 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:24:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:34.015 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 26 17:24:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:34.015 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:24:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:34.016 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:24:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:34.016 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:24:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:34.016 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:24:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:34.016 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:24:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:34.016 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:24:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:34.016 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:24:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:34.016 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:24:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:34.017 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:24:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:34.017 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:24:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:34.017 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:24:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:34.017 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:24:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:34.017 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:24:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:34.017 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:24:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:34.017 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:24:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:34.017 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:24:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:34.017 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:24:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:34.017 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:24:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:34.017 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:24:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:34.018 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:24:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:34.018 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:24:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:34.018 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:24:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:34.018 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:24:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:34.018 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:24:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:34.018 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:24:34 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:24:34.018 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:24:35 compute-0 nova_compute[185389]: 2026-01-26 17:24:35.211 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:24:35 compute-0 nova_compute[185389]: 2026-01-26 17:24:35.938 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:24:36 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:24:36.541 106955 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '0e:35:12', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'de:09:3c:73:7d:ab'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 17:24:36 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:24:36.543 106955 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 26 17:24:36 compute-0 nova_compute[185389]: 2026-01-26 17:24:36.542 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:24:36 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:24:36.543 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1c72c11d-5050-47c3-89e8-912766588fb3, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:24:37 compute-0 nova_compute[185389]: 2026-01-26 17:24:37.221 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:24:37 compute-0 nova_compute[185389]: 2026-01-26 17:24:37.222 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:24:37 compute-0 nova_compute[185389]: 2026-01-26 17:24:37.280 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:24:40 compute-0 nova_compute[185389]: 2026-01-26 17:24:40.214 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:24:40 compute-0 nova_compute[185389]: 2026-01-26 17:24:40.616 185393 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769448262.429169, 186e87cb-beb9-48df-8b10-dfc5c8afe996 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 17:24:40 compute-0 nova_compute[185389]: 2026-01-26 17:24:40.617 185393 INFO nova.compute.manager [-] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] VM Stopped (Lifecycle Event)
Jan 26 17:24:40 compute-0 nova_compute[185389]: 2026-01-26 17:24:40.661 185393 DEBUG nova.compute.manager [None req-17a80bd8-a5d2-49c5-8ec1-0607c2cf33b2 - - - - - -] [instance: 186e87cb-beb9-48df-8b10-dfc5c8afe996] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 17:24:40 compute-0 nova_compute[185389]: 2026-01-26 17:24:40.941 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:24:41 compute-0 podman[258617]: 2026-01-26 17:24:41.210444994 +0000 UTC m=+0.083810175 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20260120, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute)
Jan 26 17:24:41 compute-0 podman[258618]: 2026-01-26 17:24:41.226649094 +0000 UTC m=+0.097788176 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 17:24:41 compute-0 podman[258616]: 2026-01-26 17:24:41.231811434 +0000 UTC m=+0.106886872 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.openshift.expose-services=, container_name=openstack_network_exporter, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, managed_by=edpm_ansible, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=openstack_network_exporter)
Jan 26 17:24:41 compute-0 nova_compute[185389]: 2026-01-26 17:24:41.827 185393 DEBUG oslo_concurrency.lockutils [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Acquiring lock "a7263205-e4bb-4bdd-bdf4-a91586c033c2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:24:41 compute-0 nova_compute[185389]: 2026-01-26 17:24:41.829 185393 DEBUG oslo_concurrency.lockutils [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Lock "a7263205-e4bb-4bdd-bdf4-a91586c033c2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:24:41 compute-0 nova_compute[185389]: 2026-01-26 17:24:41.852 185393 DEBUG nova.compute.manager [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] [instance: a7263205-e4bb-4bdd-bdf4-a91586c033c2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 17:24:41 compute-0 nova_compute[185389]: 2026-01-26 17:24:41.935 185393 DEBUG oslo_concurrency.lockutils [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:24:41 compute-0 nova_compute[185389]: 2026-01-26 17:24:41.936 185393 DEBUG oslo_concurrency.lockutils [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:24:41 compute-0 nova_compute[185389]: 2026-01-26 17:24:41.944 185393 DEBUG nova.virt.hardware [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 17:24:41 compute-0 nova_compute[185389]: 2026-01-26 17:24:41.944 185393 INFO nova.compute.claims [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] [instance: a7263205-e4bb-4bdd-bdf4-a91586c033c2] Claim successful on node compute-0.ctlplane.example.com
Jan 26 17:24:42 compute-0 nova_compute[185389]: 2026-01-26 17:24:42.124 185393 DEBUG nova.compute.provider_tree [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:24:42 compute-0 nova_compute[185389]: 2026-01-26 17:24:42.143 185393 DEBUG nova.scheduler.client.report [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:24:42 compute-0 nova_compute[185389]: 2026-01-26 17:24:42.170 185393 DEBUG oslo_concurrency.lockutils [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.234s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:24:42 compute-0 nova_compute[185389]: 2026-01-26 17:24:42.172 185393 DEBUG nova.compute.manager [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] [instance: a7263205-e4bb-4bdd-bdf4-a91586c033c2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 17:24:42 compute-0 nova_compute[185389]: 2026-01-26 17:24:42.248 185393 DEBUG nova.compute.manager [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] [instance: a7263205-e4bb-4bdd-bdf4-a91586c033c2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 17:24:42 compute-0 nova_compute[185389]: 2026-01-26 17:24:42.249 185393 DEBUG nova.network.neutron [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] [instance: a7263205-e4bb-4bdd-bdf4-a91586c033c2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 17:24:42 compute-0 nova_compute[185389]: 2026-01-26 17:24:42.282 185393 INFO nova.virt.libvirt.driver [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] [instance: a7263205-e4bb-4bdd-bdf4-a91586c033c2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 17:24:42 compute-0 nova_compute[185389]: 2026-01-26 17:24:42.407 185393 DEBUG nova.compute.manager [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] [instance: a7263205-e4bb-4bdd-bdf4-a91586c033c2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 17:24:42 compute-0 nova_compute[185389]: 2026-01-26 17:24:42.526 185393 DEBUG nova.compute.manager [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] [instance: a7263205-e4bb-4bdd-bdf4-a91586c033c2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 17:24:42 compute-0 nova_compute[185389]: 2026-01-26 17:24:42.528 185393 DEBUG nova.virt.libvirt.driver [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] [instance: a7263205-e4bb-4bdd-bdf4-a91586c033c2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 17:24:42 compute-0 nova_compute[185389]: 2026-01-26 17:24:42.529 185393 INFO nova.virt.libvirt.driver [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] [instance: a7263205-e4bb-4bdd-bdf4-a91586c033c2] Creating image(s)
Jan 26 17:24:42 compute-0 nova_compute[185389]: 2026-01-26 17:24:42.530 185393 DEBUG oslo_concurrency.lockutils [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Acquiring lock "/var/lib/nova/instances/a7263205-e4bb-4bdd-bdf4-a91586c033c2/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:24:42 compute-0 nova_compute[185389]: 2026-01-26 17:24:42.531 185393 DEBUG oslo_concurrency.lockutils [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Lock "/var/lib/nova/instances/a7263205-e4bb-4bdd-bdf4-a91586c033c2/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:24:42 compute-0 nova_compute[185389]: 2026-01-26 17:24:42.532 185393 DEBUG oslo_concurrency.lockutils [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Lock "/var/lib/nova/instances/a7263205-e4bb-4bdd-bdf4-a91586c033c2/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:24:42 compute-0 nova_compute[185389]: 2026-01-26 17:24:42.546 185393 DEBUG oslo_concurrency.processutils [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/fce4846b88d54584e094a4cd72b3ae3b642ea493 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:24:42 compute-0 nova_compute[185389]: 2026-01-26 17:24:42.610 185393 DEBUG oslo_concurrency.processutils [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/fce4846b88d54584e094a4cd72b3ae3b642ea493 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:24:42 compute-0 nova_compute[185389]: 2026-01-26 17:24:42.612 185393 DEBUG oslo_concurrency.lockutils [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Acquiring lock "fce4846b88d54584e094a4cd72b3ae3b642ea493" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:24:42 compute-0 nova_compute[185389]: 2026-01-26 17:24:42.613 185393 DEBUG oslo_concurrency.lockutils [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Lock "fce4846b88d54584e094a4cd72b3ae3b642ea493" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:24:42 compute-0 nova_compute[185389]: 2026-01-26 17:24:42.625 185393 DEBUG oslo_concurrency.processutils [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/fce4846b88d54584e094a4cd72b3ae3b642ea493 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:24:42 compute-0 nova_compute[185389]: 2026-01-26 17:24:42.691 185393 DEBUG oslo_concurrency.processutils [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/fce4846b88d54584e094a4cd72b3ae3b642ea493 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:24:42 compute-0 nova_compute[185389]: 2026-01-26 17:24:42.692 185393 DEBUG oslo_concurrency.processutils [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/fce4846b88d54584e094a4cd72b3ae3b642ea493,backing_fmt=raw /var/lib/nova/instances/a7263205-e4bb-4bdd-bdf4-a91586c033c2/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:24:42 compute-0 nova_compute[185389]: 2026-01-26 17:24:42.740 185393 DEBUG oslo_concurrency.processutils [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/fce4846b88d54584e094a4cd72b3ae3b642ea493,backing_fmt=raw /var/lib/nova/instances/a7263205-e4bb-4bdd-bdf4-a91586c033c2/disk 1073741824" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:24:42 compute-0 nova_compute[185389]: 2026-01-26 17:24:42.742 185393 DEBUG oslo_concurrency.lockutils [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Lock "fce4846b88d54584e094a4cd72b3ae3b642ea493" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.130s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:24:42 compute-0 nova_compute[185389]: 2026-01-26 17:24:42.743 185393 DEBUG oslo_concurrency.processutils [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/fce4846b88d54584e094a4cd72b3ae3b642ea493 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:24:42 compute-0 nova_compute[185389]: 2026-01-26 17:24:42.764 185393 DEBUG nova.policy [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a04a28d3bd7648abb04b59df0aeee0aa', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '72e07b00ccf54deaa85258e2c3332b45', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 17:24:42 compute-0 nova_compute[185389]: 2026-01-26 17:24:42.805 185393 DEBUG oslo_concurrency.processutils [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/fce4846b88d54584e094a4cd72b3ae3b642ea493 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:24:42 compute-0 nova_compute[185389]: 2026-01-26 17:24:42.806 185393 DEBUG nova.virt.disk.api [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Checking if we can resize image /var/lib/nova/instances/a7263205-e4bb-4bdd-bdf4-a91586c033c2/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 26 17:24:42 compute-0 nova_compute[185389]: 2026-01-26 17:24:42.807 185393 DEBUG oslo_concurrency.processutils [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a7263205-e4bb-4bdd-bdf4-a91586c033c2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:24:42 compute-0 nova_compute[185389]: 2026-01-26 17:24:42.865 185393 DEBUG oslo_concurrency.processutils [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a7263205-e4bb-4bdd-bdf4-a91586c033c2/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:24:42 compute-0 nova_compute[185389]: 2026-01-26 17:24:42.867 185393 DEBUG nova.virt.disk.api [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Cannot resize image /var/lib/nova/instances/a7263205-e4bb-4bdd-bdf4-a91586c033c2/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 26 17:24:42 compute-0 nova_compute[185389]: 2026-01-26 17:24:42.868 185393 DEBUG nova.objects.instance [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Lazy-loading 'migration_context' on Instance uuid a7263205-e4bb-4bdd-bdf4-a91586c033c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 17:24:43 compute-0 nova_compute[185389]: 2026-01-26 17:24:43.041 185393 DEBUG nova.virt.libvirt.driver [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] [instance: a7263205-e4bb-4bdd-bdf4-a91586c033c2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 17:24:43 compute-0 nova_compute[185389]: 2026-01-26 17:24:43.043 185393 DEBUG nova.virt.libvirt.driver [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] [instance: a7263205-e4bb-4bdd-bdf4-a91586c033c2] Ensure instance console log exists: /var/lib/nova/instances/a7263205-e4bb-4bdd-bdf4-a91586c033c2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 17:24:43 compute-0 nova_compute[185389]: 2026-01-26 17:24:43.044 185393 DEBUG oslo_concurrency.lockutils [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:24:43 compute-0 nova_compute[185389]: 2026-01-26 17:24:43.045 185393 DEBUG oslo_concurrency.lockutils [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:24:43 compute-0 nova_compute[185389]: 2026-01-26 17:24:43.046 185393 DEBUG oslo_concurrency.lockutils [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:24:43 compute-0 podman[258695]: 2026-01-26 17:24:43.201222195 +0000 UTC m=+0.089534511 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 26 17:24:43 compute-0 ovn_controller[97699]: 2026-01-26T17:24:43Z|00137|binding|INFO|Releasing lport dd4ac4a7-c264-4fc8-95aa-36a318cdf39e from this chassis (sb_readonly=0)
Jan 26 17:24:43 compute-0 ovn_controller[97699]: 2026-01-26T17:24:43Z|00138|binding|INFO|Releasing lport 072b84ed-db94-41f8-b8ae-79603b591704 from this chassis (sb_readonly=0)
Jan 26 17:24:43 compute-0 nova_compute[185389]: 2026-01-26 17:24:43.772 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:24:44 compute-0 ovn_controller[97699]: 2026-01-26T17:24:44Z|00139|binding|INFO|Releasing lport dd4ac4a7-c264-4fc8-95aa-36a318cdf39e from this chassis (sb_readonly=0)
Jan 26 17:24:44 compute-0 ovn_controller[97699]: 2026-01-26T17:24:44Z|00140|binding|INFO|Releasing lport 072b84ed-db94-41f8-b8ae-79603b591704 from this chassis (sb_readonly=0)
Jan 26 17:24:44 compute-0 nova_compute[185389]: 2026-01-26 17:24:44.010 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:24:44 compute-0 podman[258722]: 2026-01-26 17:24:44.75670763 +0000 UTC m=+0.073389323 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 26 17:24:45 compute-0 nova_compute[185389]: 2026-01-26 17:24:45.112 185393 DEBUG nova.network.neutron [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] [instance: a7263205-e4bb-4bdd-bdf4-a91586c033c2] Successfully created port: 07768be0-5acf-4962-8e50-883ab34f0d88 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 17:24:45 compute-0 nova_compute[185389]: 2026-01-26 17:24:45.218 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:24:45 compute-0 nova_compute[185389]: 2026-01-26 17:24:45.944 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:24:46 compute-0 nova_compute[185389]: 2026-01-26 17:24:46.767 185393 DEBUG nova.network.neutron [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] [instance: a7263205-e4bb-4bdd-bdf4-a91586c033c2] Successfully updated port: 07768be0-5acf-4962-8e50-883ab34f0d88 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 17:24:46 compute-0 nova_compute[185389]: 2026-01-26 17:24:46.784 185393 DEBUG oslo_concurrency.lockutils [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Acquiring lock "refresh_cache-a7263205-e4bb-4bdd-bdf4-a91586c033c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 17:24:46 compute-0 nova_compute[185389]: 2026-01-26 17:24:46.785 185393 DEBUG oslo_concurrency.lockutils [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Acquired lock "refresh_cache-a7263205-e4bb-4bdd-bdf4-a91586c033c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 17:24:46 compute-0 nova_compute[185389]: 2026-01-26 17:24:46.785 185393 DEBUG nova.network.neutron [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] [instance: a7263205-e4bb-4bdd-bdf4-a91586c033c2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 17:24:46 compute-0 nova_compute[185389]: 2026-01-26 17:24:46.973 185393 DEBUG nova.compute.manager [req-ea8ef700-fc7b-4608-8068-fde7606a7336 req-9a08b97d-17df-4852-857b-b856e4c23f11 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: a7263205-e4bb-4bdd-bdf4-a91586c033c2] Received event network-changed-07768be0-5acf-4962-8e50-883ab34f0d88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 17:24:46 compute-0 nova_compute[185389]: 2026-01-26 17:24:46.974 185393 DEBUG nova.compute.manager [req-ea8ef700-fc7b-4608-8068-fde7606a7336 req-9a08b97d-17df-4852-857b-b856e4c23f11 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: a7263205-e4bb-4bdd-bdf4-a91586c033c2] Refreshing instance network info cache due to event network-changed-07768be0-5acf-4962-8e50-883ab34f0d88. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 17:24:46 compute-0 nova_compute[185389]: 2026-01-26 17:24:46.974 185393 DEBUG oslo_concurrency.lockutils [req-ea8ef700-fc7b-4608-8068-fde7606a7336 req-9a08b97d-17df-4852-857b-b856e4c23f11 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquiring lock "refresh_cache-a7263205-e4bb-4bdd-bdf4-a91586c033c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 17:24:47 compute-0 nova_compute[185389]: 2026-01-26 17:24:47.080 185393 DEBUG nova.network.neutron [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] [instance: a7263205-e4bb-4bdd-bdf4-a91586c033c2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 17:24:48 compute-0 nova_compute[185389]: 2026-01-26 17:24:48.442 185393 DEBUG nova.network.neutron [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] [instance: a7263205-e4bb-4bdd-bdf4-a91586c033c2] Updating instance_info_cache with network_info: [{"id": "07768be0-5acf-4962-8e50-883ab34f0d88", "address": "fa:16:3e:c2:6d:94", "network": {"id": "181e9ee7-4b3f-4c71-9f87-ee525fae0a23", "bridge": "br-int", "label": "tempest-network-smoke--2054182957", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "72e07b00ccf54deaa85258e2c3332b45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07768be0-5a", "ovs_interfaceid": "07768be0-5acf-4962-8e50-883ab34f0d88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 17:24:48 compute-0 nova_compute[185389]: 2026-01-26 17:24:48.611 185393 DEBUG oslo_concurrency.lockutils [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Releasing lock "refresh_cache-a7263205-e4bb-4bdd-bdf4-a91586c033c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 17:24:48 compute-0 nova_compute[185389]: 2026-01-26 17:24:48.612 185393 DEBUG nova.compute.manager [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] [instance: a7263205-e4bb-4bdd-bdf4-a91586c033c2] Instance network_info: |[{"id": "07768be0-5acf-4962-8e50-883ab34f0d88", "address": "fa:16:3e:c2:6d:94", "network": {"id": "181e9ee7-4b3f-4c71-9f87-ee525fae0a23", "bridge": "br-int", "label": "tempest-network-smoke--2054182957", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "72e07b00ccf54deaa85258e2c3332b45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07768be0-5a", "ovs_interfaceid": "07768be0-5acf-4962-8e50-883ab34f0d88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 17:24:48 compute-0 nova_compute[185389]: 2026-01-26 17:24:48.613 185393 DEBUG oslo_concurrency.lockutils [req-ea8ef700-fc7b-4608-8068-fde7606a7336 req-9a08b97d-17df-4852-857b-b856e4c23f11 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquired lock "refresh_cache-a7263205-e4bb-4bdd-bdf4-a91586c033c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 17:24:48 compute-0 nova_compute[185389]: 2026-01-26 17:24:48.613 185393 DEBUG nova.network.neutron [req-ea8ef700-fc7b-4608-8068-fde7606a7336 req-9a08b97d-17df-4852-857b-b856e4c23f11 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: a7263205-e4bb-4bdd-bdf4-a91586c033c2] Refreshing network info cache for port 07768be0-5acf-4962-8e50-883ab34f0d88 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 17:24:48 compute-0 nova_compute[185389]: 2026-01-26 17:24:48.616 185393 DEBUG nova.virt.libvirt.driver [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] [instance: a7263205-e4bb-4bdd-bdf4-a91586c033c2] Start _get_guest_xml network_info=[{"id": "07768be0-5acf-4962-8e50-883ab34f0d88", "address": "fa:16:3e:c2:6d:94", "network": {"id": "181e9ee7-4b3f-4c71-9f87-ee525fae0a23", "bridge": "br-int", "label": "tempest-network-smoke--2054182957", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "72e07b00ccf54deaa85258e2c3332b45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07768be0-5a", "ovs_interfaceid": "07768be0-5acf-4962-8e50-883ab34f0d88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T17:20:37Z,direct_url=<?>,disk_format='qcow2',id=90acf026-cf3a-409a-999e-35d89bb9a6bf,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='aa8f1f3bbce34237a208c8e92ca9286f',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T17:20:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'boot_index': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '90acf026-cf3a-409a-999e-35d89bb9a6bf'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 17:24:48 compute-0 nova_compute[185389]: 2026-01-26 17:24:48.626 185393 WARNING nova.virt.libvirt.driver [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:24:48 compute-0 nova_compute[185389]: 2026-01-26 17:24:48.637 185393 DEBUG nova.virt.libvirt.host [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 17:24:48 compute-0 nova_compute[185389]: 2026-01-26 17:24:48.637 185393 DEBUG nova.virt.libvirt.host [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 17:24:48 compute-0 nova_compute[185389]: 2026-01-26 17:24:48.643 185393 DEBUG nova.virt.libvirt.host [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 17:24:48 compute-0 nova_compute[185389]: 2026-01-26 17:24:48.644 185393 DEBUG nova.virt.libvirt.host [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 17:24:48 compute-0 nova_compute[185389]: 2026-01-26 17:24:48.645 185393 DEBUG nova.virt.libvirt.driver [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 17:24:48 compute-0 nova_compute[185389]: 2026-01-26 17:24:48.645 185393 DEBUG nova.virt.hardware [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T17:20:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8d013773-e8ea-4b83-a8e3-f58d9749637f',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T17:20:37Z,direct_url=<?>,disk_format='qcow2',id=90acf026-cf3a-409a-999e-35d89bb9a6bf,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='aa8f1f3bbce34237a208c8e92ca9286f',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T17:20:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 17:24:48 compute-0 nova_compute[185389]: 2026-01-26 17:24:48.646 185393 DEBUG nova.virt.hardware [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 17:24:48 compute-0 nova_compute[185389]: 2026-01-26 17:24:48.646 185393 DEBUG nova.virt.hardware [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 17:24:48 compute-0 nova_compute[185389]: 2026-01-26 17:24:48.647 185393 DEBUG nova.virt.hardware [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 17:24:48 compute-0 nova_compute[185389]: 2026-01-26 17:24:48.647 185393 DEBUG nova.virt.hardware [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 17:24:48 compute-0 nova_compute[185389]: 2026-01-26 17:24:48.647 185393 DEBUG nova.virt.hardware [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 17:24:48 compute-0 nova_compute[185389]: 2026-01-26 17:24:48.648 185393 DEBUG nova.virt.hardware [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 17:24:48 compute-0 nova_compute[185389]: 2026-01-26 17:24:48.648 185393 DEBUG nova.virt.hardware [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 17:24:48 compute-0 nova_compute[185389]: 2026-01-26 17:24:48.648 185393 DEBUG nova.virt.hardware [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 17:24:48 compute-0 nova_compute[185389]: 2026-01-26 17:24:48.649 185393 DEBUG nova.virt.hardware [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 17:24:48 compute-0 nova_compute[185389]: 2026-01-26 17:24:48.649 185393 DEBUG nova.virt.hardware [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 17:24:48 compute-0 nova_compute[185389]: 2026-01-26 17:24:48.653 185393 DEBUG nova.virt.libvirt.vif [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T17:24:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1378260093',display_name='tempest-TestNetworkBasicOps-server-1378260093',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1378260093',id=14,image_ref='90acf026-cf3a-409a-999e-35d89bb9a6bf',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFcO5mlcyR223mUpsTonwEbNjH48jCuOU16DfMyIISz2MiI9UeUKnbTf4AFhGdeh1wBbW2VTzpLAYp/p1Wl4FSCUVlGhx9rmAJ1p58EqotOHx5lVxUSFYIJBimVjOiy9fA==',key_name='tempest-TestNetworkBasicOps-430105833',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='72e07b00ccf54deaa85258e2c3332b45',ramdisk_id='',reservation_id='r-x295li7g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='90acf026-cf3a-409a-999e-35d89bb9a6bf',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-420464940',owner_user_name='tempest-TestNetworkBasicOps-420464940-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T17:24:42Z,user_data=None,user_id='a04a28d3bd7648abb04b59df0aeee0aa',uuid=a7263205-e4bb-4bdd-bdf4-a91586c033c2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "07768be0-5acf-4962-8e50-883ab34f0d88", "address": "fa:16:3e:c2:6d:94", "network": {"id": "181e9ee7-4b3f-4c71-9f87-ee525fae0a23", "bridge": "br-int", "label": "tempest-network-smoke--2054182957", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "72e07b00ccf54deaa85258e2c3332b45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07768be0-5a", "ovs_interfaceid": "07768be0-5acf-4962-8e50-883ab34f0d88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 17:24:48 compute-0 nova_compute[185389]: 2026-01-26 17:24:48.653 185393 DEBUG nova.network.os_vif_util [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Converting VIF {"id": "07768be0-5acf-4962-8e50-883ab34f0d88", "address": "fa:16:3e:c2:6d:94", "network": {"id": "181e9ee7-4b3f-4c71-9f87-ee525fae0a23", "bridge": "br-int", "label": "tempest-network-smoke--2054182957", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "72e07b00ccf54deaa85258e2c3332b45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07768be0-5a", "ovs_interfaceid": "07768be0-5acf-4962-8e50-883ab34f0d88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 17:24:48 compute-0 nova_compute[185389]: 2026-01-26 17:24:48.654 185393 DEBUG nova.network.os_vif_util [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c2:6d:94,bridge_name='br-int',has_traffic_filtering=True,id=07768be0-5acf-4962-8e50-883ab34f0d88,network=Network(181e9ee7-4b3f-4c71-9f87-ee525fae0a23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07768be0-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 17:24:48 compute-0 nova_compute[185389]: 2026-01-26 17:24:48.655 185393 DEBUG nova.objects.instance [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Lazy-loading 'pci_devices' on Instance uuid a7263205-e4bb-4bdd-bdf4-a91586c033c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 17:24:48 compute-0 nova_compute[185389]: 2026-01-26 17:24:48.823 185393 DEBUG nova.virt.libvirt.driver [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] [instance: a7263205-e4bb-4bdd-bdf4-a91586c033c2] End _get_guest_xml xml=<domain type="kvm">
Jan 26 17:24:48 compute-0 nova_compute[185389]:   <uuid>a7263205-e4bb-4bdd-bdf4-a91586c033c2</uuid>
Jan 26 17:24:48 compute-0 nova_compute[185389]:   <name>instance-0000000e</name>
Jan 26 17:24:48 compute-0 nova_compute[185389]:   <memory>131072</memory>
Jan 26 17:24:48 compute-0 nova_compute[185389]:   <vcpu>1</vcpu>
Jan 26 17:24:48 compute-0 nova_compute[185389]:   <metadata>
Jan 26 17:24:48 compute-0 nova_compute[185389]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 17:24:48 compute-0 nova_compute[185389]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 17:24:48 compute-0 nova_compute[185389]:       <nova:name>tempest-TestNetworkBasicOps-server-1378260093</nova:name>
Jan 26 17:24:48 compute-0 nova_compute[185389]:       <nova:creationTime>2026-01-26 17:24:48</nova:creationTime>
Jan 26 17:24:48 compute-0 nova_compute[185389]:       <nova:flavor name="m1.nano">
Jan 26 17:24:48 compute-0 nova_compute[185389]:         <nova:memory>128</nova:memory>
Jan 26 17:24:48 compute-0 nova_compute[185389]:         <nova:disk>1</nova:disk>
Jan 26 17:24:48 compute-0 nova_compute[185389]:         <nova:swap>0</nova:swap>
Jan 26 17:24:48 compute-0 nova_compute[185389]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 17:24:48 compute-0 nova_compute[185389]:         <nova:vcpus>1</nova:vcpus>
Jan 26 17:24:48 compute-0 nova_compute[185389]:       </nova:flavor>
Jan 26 17:24:48 compute-0 nova_compute[185389]:       <nova:owner>
Jan 26 17:24:48 compute-0 nova_compute[185389]:         <nova:user uuid="a04a28d3bd7648abb04b59df0aeee0aa">tempest-TestNetworkBasicOps-420464940-project-member</nova:user>
Jan 26 17:24:48 compute-0 nova_compute[185389]:         <nova:project uuid="72e07b00ccf54deaa85258e2c3332b45">tempest-TestNetworkBasicOps-420464940</nova:project>
Jan 26 17:24:48 compute-0 nova_compute[185389]:       </nova:owner>
Jan 26 17:24:48 compute-0 nova_compute[185389]:       <nova:root type="image" uuid="90acf026-cf3a-409a-999e-35d89bb9a6bf"/>
Jan 26 17:24:48 compute-0 nova_compute[185389]:       <nova:ports>
Jan 26 17:24:48 compute-0 nova_compute[185389]:         <nova:port uuid="07768be0-5acf-4962-8e50-883ab34f0d88">
Jan 26 17:24:48 compute-0 nova_compute[185389]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 26 17:24:48 compute-0 nova_compute[185389]:         </nova:port>
Jan 26 17:24:48 compute-0 nova_compute[185389]:       </nova:ports>
Jan 26 17:24:48 compute-0 nova_compute[185389]:     </nova:instance>
Jan 26 17:24:48 compute-0 nova_compute[185389]:   </metadata>
Jan 26 17:24:48 compute-0 nova_compute[185389]:   <sysinfo type="smbios">
Jan 26 17:24:48 compute-0 nova_compute[185389]:     <system>
Jan 26 17:24:48 compute-0 nova_compute[185389]:       <entry name="manufacturer">RDO</entry>
Jan 26 17:24:48 compute-0 nova_compute[185389]:       <entry name="product">OpenStack Compute</entry>
Jan 26 17:24:48 compute-0 nova_compute[185389]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 17:24:48 compute-0 nova_compute[185389]:       <entry name="serial">a7263205-e4bb-4bdd-bdf4-a91586c033c2</entry>
Jan 26 17:24:48 compute-0 nova_compute[185389]:       <entry name="uuid">a7263205-e4bb-4bdd-bdf4-a91586c033c2</entry>
Jan 26 17:24:48 compute-0 nova_compute[185389]:       <entry name="family">Virtual Machine</entry>
Jan 26 17:24:48 compute-0 nova_compute[185389]:     </system>
Jan 26 17:24:48 compute-0 nova_compute[185389]:   </sysinfo>
Jan 26 17:24:48 compute-0 nova_compute[185389]:   <os>
Jan 26 17:24:48 compute-0 nova_compute[185389]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 17:24:48 compute-0 nova_compute[185389]:     <boot dev="hd"/>
Jan 26 17:24:48 compute-0 nova_compute[185389]:     <smbios mode="sysinfo"/>
Jan 26 17:24:48 compute-0 nova_compute[185389]:   </os>
Jan 26 17:24:48 compute-0 nova_compute[185389]:   <features>
Jan 26 17:24:48 compute-0 nova_compute[185389]:     <acpi/>
Jan 26 17:24:48 compute-0 nova_compute[185389]:     <apic/>
Jan 26 17:24:48 compute-0 nova_compute[185389]:     <vmcoreinfo/>
Jan 26 17:24:48 compute-0 nova_compute[185389]:   </features>
Jan 26 17:24:48 compute-0 nova_compute[185389]:   <clock offset="utc">
Jan 26 17:24:48 compute-0 nova_compute[185389]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 17:24:48 compute-0 nova_compute[185389]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 17:24:48 compute-0 nova_compute[185389]:     <timer name="hpet" present="no"/>
Jan 26 17:24:48 compute-0 nova_compute[185389]:   </clock>
Jan 26 17:24:48 compute-0 nova_compute[185389]:   <cpu mode="host-model" match="exact">
Jan 26 17:24:48 compute-0 nova_compute[185389]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 17:24:48 compute-0 nova_compute[185389]:   </cpu>
Jan 26 17:24:48 compute-0 nova_compute[185389]:   <devices>
Jan 26 17:24:48 compute-0 nova_compute[185389]:     <disk type="file" device="disk">
Jan 26 17:24:48 compute-0 nova_compute[185389]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 26 17:24:48 compute-0 nova_compute[185389]:       <source file="/var/lib/nova/instances/a7263205-e4bb-4bdd-bdf4-a91586c033c2/disk"/>
Jan 26 17:24:48 compute-0 nova_compute[185389]:       <target dev="vda" bus="virtio"/>
Jan 26 17:24:48 compute-0 nova_compute[185389]:     </disk>
Jan 26 17:24:48 compute-0 nova_compute[185389]:     <disk type="file" device="cdrom">
Jan 26 17:24:48 compute-0 nova_compute[185389]:       <driver name="qemu" type="raw" cache="none"/>
Jan 26 17:24:48 compute-0 nova_compute[185389]:       <source file="/var/lib/nova/instances/a7263205-e4bb-4bdd-bdf4-a91586c033c2/disk.config"/>
Jan 26 17:24:48 compute-0 nova_compute[185389]:       <target dev="sda" bus="sata"/>
Jan 26 17:24:48 compute-0 nova_compute[185389]:     </disk>
Jan 26 17:24:48 compute-0 nova_compute[185389]:     <interface type="ethernet">
Jan 26 17:24:48 compute-0 nova_compute[185389]:       <mac address="fa:16:3e:c2:6d:94"/>
Jan 26 17:24:48 compute-0 nova_compute[185389]:       <model type="virtio"/>
Jan 26 17:24:48 compute-0 nova_compute[185389]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 17:24:48 compute-0 nova_compute[185389]:       <mtu size="1442"/>
Jan 26 17:24:48 compute-0 nova_compute[185389]:       <target dev="tap07768be0-5a"/>
Jan 26 17:24:48 compute-0 nova_compute[185389]:     </interface>
Jan 26 17:24:48 compute-0 nova_compute[185389]:     <serial type="pty">
Jan 26 17:24:48 compute-0 nova_compute[185389]:       <log file="/var/lib/nova/instances/a7263205-e4bb-4bdd-bdf4-a91586c033c2/console.log" append="off"/>
Jan 26 17:24:48 compute-0 nova_compute[185389]:     </serial>
Jan 26 17:24:48 compute-0 nova_compute[185389]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 17:24:48 compute-0 nova_compute[185389]:     <video>
Jan 26 17:24:48 compute-0 nova_compute[185389]:       <model type="virtio"/>
Jan 26 17:24:48 compute-0 nova_compute[185389]:     </video>
Jan 26 17:24:48 compute-0 nova_compute[185389]:     <input type="tablet" bus="usb"/>
Jan 26 17:24:48 compute-0 nova_compute[185389]:     <rng model="virtio">
Jan 26 17:24:48 compute-0 nova_compute[185389]:       <backend model="random">/dev/urandom</backend>
Jan 26 17:24:48 compute-0 nova_compute[185389]:     </rng>
Jan 26 17:24:48 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root"/>
Jan 26 17:24:48 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:24:48 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:24:48 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:24:48 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:24:48 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:24:48 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:24:48 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:24:48 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:24:48 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:24:48 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:24:48 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:24:48 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:24:48 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:24:48 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:24:48 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:24:48 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:24:48 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:24:48 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:24:48 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:24:48 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:24:48 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:24:48 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:24:48 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:24:48 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:24:48 compute-0 nova_compute[185389]:     <controller type="usb" index="0"/>
Jan 26 17:24:48 compute-0 nova_compute[185389]:     <memballoon model="virtio">
Jan 26 17:24:48 compute-0 nova_compute[185389]:       <stats period="10"/>
Jan 26 17:24:48 compute-0 nova_compute[185389]:     </memballoon>
Jan 26 17:24:48 compute-0 nova_compute[185389]:   </devices>
Jan 26 17:24:48 compute-0 nova_compute[185389]: </domain>
Jan 26 17:24:48 compute-0 nova_compute[185389]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 17:24:48 compute-0 nova_compute[185389]: 2026-01-26 17:24:48.824 185393 DEBUG nova.compute.manager [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] [instance: a7263205-e4bb-4bdd-bdf4-a91586c033c2] Preparing to wait for external event network-vif-plugged-07768be0-5acf-4962-8e50-883ab34f0d88 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 17:24:48 compute-0 nova_compute[185389]: 2026-01-26 17:24:48.825 185393 DEBUG oslo_concurrency.lockutils [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Acquiring lock "a7263205-e4bb-4bdd-bdf4-a91586c033c2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:24:48 compute-0 nova_compute[185389]: 2026-01-26 17:24:48.825 185393 DEBUG oslo_concurrency.lockutils [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Lock "a7263205-e4bb-4bdd-bdf4-a91586c033c2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:24:48 compute-0 nova_compute[185389]: 2026-01-26 17:24:48.826 185393 DEBUG oslo_concurrency.lockutils [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Lock "a7263205-e4bb-4bdd-bdf4-a91586c033c2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:24:48 compute-0 nova_compute[185389]: 2026-01-26 17:24:48.827 185393 DEBUG nova.virt.libvirt.vif [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T17:24:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1378260093',display_name='tempest-TestNetworkBasicOps-server-1378260093',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1378260093',id=14,image_ref='90acf026-cf3a-409a-999e-35d89bb9a6bf',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFcO5mlcyR223mUpsTonwEbNjH48jCuOU16DfMyIISz2MiI9UeUKnbTf4AFhGdeh1wBbW2VTzpLAYp/p1Wl4FSCUVlGhx9rmAJ1p58EqotOHx5lVxUSFYIJBimVjOiy9fA==',key_name='tempest-TestNetworkBasicOps-430105833',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='72e07b00ccf54deaa85258e2c3332b45',ramdisk_id='',reservation_id='r-x295li7g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='90acf026-cf3a-409a-999e-35d89bb9a6bf',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-420464940',owner_user_name='tempest-TestNetworkBasicOps-420464940-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T17:24:42Z,user_data=None,user_id='a04a28d3bd7648abb04b59df0aeee0aa',uuid=a7263205-e4bb-4bdd-bdf4-a91586c033c2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "07768be0-5acf-4962-8e50-883ab34f0d88", "address": "fa:16:3e:c2:6d:94", "network": {"id": "181e9ee7-4b3f-4c71-9f87-ee525fae0a23", "bridge": "br-int", "label": "tempest-network-smoke--2054182957", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "72e07b00ccf54deaa85258e2c3332b45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07768be0-5a", "ovs_interfaceid": "07768be0-5acf-4962-8e50-883ab34f0d88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 17:24:48 compute-0 nova_compute[185389]: 2026-01-26 17:24:48.827 185393 DEBUG nova.network.os_vif_util [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Converting VIF {"id": "07768be0-5acf-4962-8e50-883ab34f0d88", "address": "fa:16:3e:c2:6d:94", "network": {"id": "181e9ee7-4b3f-4c71-9f87-ee525fae0a23", "bridge": "br-int", "label": "tempest-network-smoke--2054182957", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "72e07b00ccf54deaa85258e2c3332b45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07768be0-5a", "ovs_interfaceid": "07768be0-5acf-4962-8e50-883ab34f0d88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 17:24:48 compute-0 nova_compute[185389]: 2026-01-26 17:24:48.828 185393 DEBUG nova.network.os_vif_util [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c2:6d:94,bridge_name='br-int',has_traffic_filtering=True,id=07768be0-5acf-4962-8e50-883ab34f0d88,network=Network(181e9ee7-4b3f-4c71-9f87-ee525fae0a23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07768be0-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 17:24:48 compute-0 nova_compute[185389]: 2026-01-26 17:24:48.828 185393 DEBUG os_vif [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c2:6d:94,bridge_name='br-int',has_traffic_filtering=True,id=07768be0-5acf-4962-8e50-883ab34f0d88,network=Network(181e9ee7-4b3f-4c71-9f87-ee525fae0a23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07768be0-5a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 17:24:48 compute-0 nova_compute[185389]: 2026-01-26 17:24:48.829 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:24:48 compute-0 nova_compute[185389]: 2026-01-26 17:24:48.829 185393 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:24:48 compute-0 nova_compute[185389]: 2026-01-26 17:24:48.830 185393 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 17:24:48 compute-0 nova_compute[185389]: 2026-01-26 17:24:48.838 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:24:48 compute-0 nova_compute[185389]: 2026-01-26 17:24:48.838 185393 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap07768be0-5a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:24:48 compute-0 nova_compute[185389]: 2026-01-26 17:24:48.839 185393 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap07768be0-5a, col_values=(('external_ids', {'iface-id': '07768be0-5acf-4962-8e50-883ab34f0d88', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c2:6d:94', 'vm-uuid': 'a7263205-e4bb-4bdd-bdf4-a91586c033c2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:24:48 compute-0 nova_compute[185389]: 2026-01-26 17:24:48.840 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:24:48 compute-0 nova_compute[185389]: 2026-01-26 17:24:48.842 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 17:24:48 compute-0 NetworkManager[56253]: <info>  [1769448288.8427] manager: (tap07768be0-5a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/68)
Jan 26 17:24:48 compute-0 nova_compute[185389]: 2026-01-26 17:24:48.857 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:24:48 compute-0 nova_compute[185389]: 2026-01-26 17:24:48.858 185393 INFO os_vif [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c2:6d:94,bridge_name='br-int',has_traffic_filtering=True,id=07768be0-5acf-4962-8e50-883ab34f0d88,network=Network(181e9ee7-4b3f-4c71-9f87-ee525fae0a23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07768be0-5a')
Jan 26 17:24:48 compute-0 podman[258740]: 2026-01-26 17:24:48.932791671 +0000 UTC m=+0.070278958 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, 
managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:24:48 compute-0 podman[258741]: 2026-01-26 17:24:48.963573267 +0000 UTC m=+0.085751179 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release-0.7.12=, vcs-type=git, com.redhat.component=ubi9-container, config_id=kepler, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, io.buildah.version=1.29.0, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, io.openshift.tags=base rhel9, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Jan 26 17:24:48 compute-0 podman[258738]: 2026-01-26 17:24:48.998569127 +0000 UTC m=+0.136085185 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 17:24:49 compute-0 nova_compute[185389]: 2026-01-26 17:24:49.172 185393 DEBUG nova.virt.libvirt.driver [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 17:24:49 compute-0 nova_compute[185389]: 2026-01-26 17:24:49.172 185393 DEBUG nova.virt.libvirt.driver [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 17:24:49 compute-0 nova_compute[185389]: 2026-01-26 17:24:49.173 185393 DEBUG nova.virt.libvirt.driver [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] No VIF found with MAC fa:16:3e:c2:6d:94, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 17:24:49 compute-0 nova_compute[185389]: 2026-01-26 17:24:49.173 185393 INFO nova.virt.libvirt.driver [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] [instance: a7263205-e4bb-4bdd-bdf4-a91586c033c2] Using config drive
Jan 26 17:24:49 compute-0 nova_compute[185389]: 2026-01-26 17:24:49.664 185393 INFO nova.virt.libvirt.driver [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] [instance: a7263205-e4bb-4bdd-bdf4-a91586c033c2] Creating config drive at /var/lib/nova/instances/a7263205-e4bb-4bdd-bdf4-a91586c033c2/disk.config
Jan 26 17:24:49 compute-0 nova_compute[185389]: 2026-01-26 17:24:49.670 185393 DEBUG oslo_concurrency.processutils [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a7263205-e4bb-4bdd-bdf4-a91586c033c2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxeqjhtkw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:24:49 compute-0 nova_compute[185389]: 2026-01-26 17:24:49.799 185393 DEBUG oslo_concurrency.processutils [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a7263205-e4bb-4bdd-bdf4-a91586c033c2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxeqjhtkw" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:24:49 compute-0 kernel: tap07768be0-5a: entered promiscuous mode
Jan 26 17:24:49 compute-0 NetworkManager[56253]: <info>  [1769448289.8673] manager: (tap07768be0-5a): new Tun device (/org/freedesktop/NetworkManager/Devices/69)
Jan 26 17:24:49 compute-0 ovn_controller[97699]: 2026-01-26T17:24:49Z|00141|binding|INFO|Claiming lport 07768be0-5acf-4962-8e50-883ab34f0d88 for this chassis.
Jan 26 17:24:49 compute-0 ovn_controller[97699]: 2026-01-26T17:24:49Z|00142|binding|INFO|07768be0-5acf-4962-8e50-883ab34f0d88: Claiming fa:16:3e:c2:6d:94 10.100.0.3
Jan 26 17:24:49 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:24:49.883 106955 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c2:6d:94 10.100.0.3'], port_security=['fa:16:3e:c2:6d:94 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'a7263205-e4bb-4bdd-bdf4-a91586c033c2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-181e9ee7-4b3f-4c71-9f87-ee525fae0a23', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '72e07b00ccf54deaa85258e2c3332b45', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6acf885d-146c-4aa7-b15f-05d4ceef5c7d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ba7e92f1-bf2b-49e8-a683-c5ce4fc70674, chassis=[<ovs.db.idl.Row object at 0x7faee18cfdf0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7faee18cfdf0>], logical_port=07768be0-5acf-4962-8e50-883ab34f0d88) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 17:24:49 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:24:49.888 106955 INFO neutron.agent.ovn.metadata.agent [-] Port 07768be0-5acf-4962-8e50-883ab34f0d88 in datapath 181e9ee7-4b3f-4c71-9f87-ee525fae0a23 bound to our chassis
Jan 26 17:24:49 compute-0 ovn_controller[97699]: 2026-01-26T17:24:49Z|00143|binding|INFO|Setting lport 07768be0-5acf-4962-8e50-883ab34f0d88 ovn-installed in OVS
Jan 26 17:24:49 compute-0 ovn_controller[97699]: 2026-01-26T17:24:49Z|00144|binding|INFO|Setting lport 07768be0-5acf-4962-8e50-883ab34f0d88 up in Southbound
Jan 26 17:24:49 compute-0 nova_compute[185389]: 2026-01-26 17:24:49.880 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:24:49 compute-0 nova_compute[185389]: 2026-01-26 17:24:49.893 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:24:49 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:24:49.891 106955 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 181e9ee7-4b3f-4c71-9f87-ee525fae0a23
Jan 26 17:24:49 compute-0 nova_compute[185389]: 2026-01-26 17:24:49.902 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:24:49 compute-0 systemd-udevd[258819]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 17:24:49 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:24:49.910 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[6b7532a3-084e-4fe6-b828-867c40008363]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:24:49 compute-0 systemd-machined[156679]: New machine qemu-15-instance-0000000e.
Jan 26 17:24:49 compute-0 NetworkManager[56253]: <info>  [1769448289.9264] device (tap07768be0-5a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 17:24:49 compute-0 systemd[1]: Started Virtual Machine qemu-15-instance-0000000e.
Jan 26 17:24:49 compute-0 NetworkManager[56253]: <info>  [1769448289.9456] device (tap07768be0-5a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 17:24:49 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:24:49.947 238787 DEBUG oslo.privsep.daemon [-] privsep: reply[e98e7151-bc5e-4f83-8744-110d54fc05ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:24:49 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:24:49.950 238787 DEBUG oslo.privsep.daemon [-] privsep: reply[ddb7fb3c-118f-429e-bb3a-0cf7fac389c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:24:49 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:24:49.984 238787 DEBUG oslo.privsep.daemon [-] privsep: reply[724b9867-6cbb-4945-9454-5d11f738e3eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:24:50 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:24:50.002 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[8318632a-482f-4965-a5e7-3601d545e1d4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap181e9ee7-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:85:aa:f4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 37], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 685671, 'reachable_time': 44061, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258832, 'error': None, 'target': 'ovnmeta-181e9ee7-4b3f-4c71-9f87-ee525fae0a23', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:24:50 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:24:50.018 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[30e6eb1f-e4e9-412f-9d77-df76bec2bc59]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap181e9ee7-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 685686, 'tstamp': 685686}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258833, 'error': None, 'target': 'ovnmeta-181e9ee7-4b3f-4c71-9f87-ee525fae0a23', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap181e9ee7-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 685690, 'tstamp': 685690}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258833, 'error': None, 'target': 'ovnmeta-181e9ee7-4b3f-4c71-9f87-ee525fae0a23', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:24:50 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:24:50.020 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap181e9ee7-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:24:50 compute-0 nova_compute[185389]: 2026-01-26 17:24:50.022 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:24:50 compute-0 nova_compute[185389]: 2026-01-26 17:24:50.023 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:24:50 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:24:50.024 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap181e9ee7-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:24:50 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:24:50.025 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 17:24:50 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:24:50.025 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap181e9ee7-40, col_values=(('external_ids', {'iface-id': 'dd4ac4a7-c264-4fc8-95aa-36a318cdf39e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:24:50 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:24:50.026 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 17:24:50 compute-0 nova_compute[185389]: 2026-01-26 17:24:50.146 185393 DEBUG nova.compute.manager [req-7bcecd16-e49c-4f65-b820-eb801664414c req-2eebb75e-8f39-4d32-98d1-9d3a8881e686 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: a7263205-e4bb-4bdd-bdf4-a91586c033c2] Received event network-vif-plugged-07768be0-5acf-4962-8e50-883ab34f0d88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 17:24:50 compute-0 nova_compute[185389]: 2026-01-26 17:24:50.146 185393 DEBUG oslo_concurrency.lockutils [req-7bcecd16-e49c-4f65-b820-eb801664414c req-2eebb75e-8f39-4d32-98d1-9d3a8881e686 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquiring lock "a7263205-e4bb-4bdd-bdf4-a91586c033c2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:24:50 compute-0 nova_compute[185389]: 2026-01-26 17:24:50.146 185393 DEBUG oslo_concurrency.lockutils [req-7bcecd16-e49c-4f65-b820-eb801664414c req-2eebb75e-8f39-4d32-98d1-9d3a8881e686 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "a7263205-e4bb-4bdd-bdf4-a91586c033c2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:24:50 compute-0 nova_compute[185389]: 2026-01-26 17:24:50.147 185393 DEBUG oslo_concurrency.lockutils [req-7bcecd16-e49c-4f65-b820-eb801664414c req-2eebb75e-8f39-4d32-98d1-9d3a8881e686 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "a7263205-e4bb-4bdd-bdf4-a91586c033c2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:24:50 compute-0 nova_compute[185389]: 2026-01-26 17:24:50.147 185393 DEBUG nova.compute.manager [req-7bcecd16-e49c-4f65-b820-eb801664414c req-2eebb75e-8f39-4d32-98d1-9d3a8881e686 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: a7263205-e4bb-4bdd-bdf4-a91586c033c2] Processing event network-vif-plugged-07768be0-5acf-4962-8e50-883ab34f0d88 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 17:24:50 compute-0 nova_compute[185389]: 2026-01-26 17:24:50.462 185393 DEBUG nova.compute.manager [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] [instance: a7263205-e4bb-4bdd-bdf4-a91586c033c2] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 17:24:50 compute-0 nova_compute[185389]: 2026-01-26 17:24:50.463 185393 DEBUG nova.virt.driver [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Emitting event <LifecycleEvent: 1769448290.4613454, a7263205-e4bb-4bdd-bdf4-a91586c033c2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 17:24:50 compute-0 nova_compute[185389]: 2026-01-26 17:24:50.463 185393 INFO nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: a7263205-e4bb-4bdd-bdf4-a91586c033c2] VM Started (Lifecycle Event)
Jan 26 17:24:50 compute-0 nova_compute[185389]: 2026-01-26 17:24:50.467 185393 DEBUG nova.virt.libvirt.driver [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] [instance: a7263205-e4bb-4bdd-bdf4-a91586c033c2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 17:24:50 compute-0 nova_compute[185389]: 2026-01-26 17:24:50.476 185393 INFO nova.virt.libvirt.driver [-] [instance: a7263205-e4bb-4bdd-bdf4-a91586c033c2] Instance spawned successfully.
Jan 26 17:24:50 compute-0 nova_compute[185389]: 2026-01-26 17:24:50.476 185393 DEBUG nova.virt.libvirt.driver [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] [instance: a7263205-e4bb-4bdd-bdf4-a91586c033c2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 17:24:50 compute-0 nova_compute[185389]: 2026-01-26 17:24:50.489 185393 DEBUG nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: a7263205-e4bb-4bdd-bdf4-a91586c033c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 17:24:50 compute-0 nova_compute[185389]: 2026-01-26 17:24:50.497 185393 DEBUG nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: a7263205-e4bb-4bdd-bdf4-a91586c033c2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 17:24:50 compute-0 nova_compute[185389]: 2026-01-26 17:24:50.502 185393 DEBUG nova.virt.libvirt.driver [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] [instance: a7263205-e4bb-4bdd-bdf4-a91586c033c2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 17:24:50 compute-0 nova_compute[185389]: 2026-01-26 17:24:50.503 185393 DEBUG nova.virt.libvirt.driver [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] [instance: a7263205-e4bb-4bdd-bdf4-a91586c033c2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 17:24:50 compute-0 nova_compute[185389]: 2026-01-26 17:24:50.503 185393 DEBUG nova.virt.libvirt.driver [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] [instance: a7263205-e4bb-4bdd-bdf4-a91586c033c2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 17:24:50 compute-0 nova_compute[185389]: 2026-01-26 17:24:50.503 185393 DEBUG nova.virt.libvirt.driver [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] [instance: a7263205-e4bb-4bdd-bdf4-a91586c033c2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 17:24:50 compute-0 nova_compute[185389]: 2026-01-26 17:24:50.504 185393 DEBUG nova.virt.libvirt.driver [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] [instance: a7263205-e4bb-4bdd-bdf4-a91586c033c2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 17:24:50 compute-0 nova_compute[185389]: 2026-01-26 17:24:50.504 185393 DEBUG nova.virt.libvirt.driver [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] [instance: a7263205-e4bb-4bdd-bdf4-a91586c033c2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 17:24:50 compute-0 nova_compute[185389]: 2026-01-26 17:24:50.537 185393 INFO nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: a7263205-e4bb-4bdd-bdf4-a91586c033c2] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 17:24:50 compute-0 nova_compute[185389]: 2026-01-26 17:24:50.537 185393 DEBUG nova.virt.driver [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Emitting event <LifecycleEvent: 1769448290.461478, a7263205-e4bb-4bdd-bdf4-a91586c033c2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 17:24:50 compute-0 nova_compute[185389]: 2026-01-26 17:24:50.537 185393 INFO nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: a7263205-e4bb-4bdd-bdf4-a91586c033c2] VM Paused (Lifecycle Event)
Jan 26 17:24:50 compute-0 nova_compute[185389]: 2026-01-26 17:24:50.578 185393 DEBUG nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: a7263205-e4bb-4bdd-bdf4-a91586c033c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 17:24:50 compute-0 nova_compute[185389]: 2026-01-26 17:24:50.585 185393 DEBUG nova.virt.driver [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Emitting event <LifecycleEvent: 1769448290.4653995, a7263205-e4bb-4bdd-bdf4-a91586c033c2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 17:24:50 compute-0 nova_compute[185389]: 2026-01-26 17:24:50.585 185393 INFO nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: a7263205-e4bb-4bdd-bdf4-a91586c033c2] VM Resumed (Lifecycle Event)
Jan 26 17:24:50 compute-0 nova_compute[185389]: 2026-01-26 17:24:50.599 185393 INFO nova.compute.manager [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] [instance: a7263205-e4bb-4bdd-bdf4-a91586c033c2] Took 8.07 seconds to spawn the instance on the hypervisor.
Jan 26 17:24:50 compute-0 nova_compute[185389]: 2026-01-26 17:24:50.599 185393 DEBUG nova.compute.manager [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] [instance: a7263205-e4bb-4bdd-bdf4-a91586c033c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 17:24:50 compute-0 nova_compute[185389]: 2026-01-26 17:24:50.611 185393 DEBUG nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: a7263205-e4bb-4bdd-bdf4-a91586c033c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 17:24:50 compute-0 nova_compute[185389]: 2026-01-26 17:24:50.616 185393 DEBUG nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: a7263205-e4bb-4bdd-bdf4-a91586c033c2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 17:24:50 compute-0 nova_compute[185389]: 2026-01-26 17:24:50.648 185393 INFO nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: a7263205-e4bb-4bdd-bdf4-a91586c033c2] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 17:24:50 compute-0 nova_compute[185389]: 2026-01-26 17:24:50.705 185393 INFO nova.compute.manager [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] [instance: a7263205-e4bb-4bdd-bdf4-a91586c033c2] Took 8.81 seconds to build instance.
Jan 26 17:24:50 compute-0 nova_compute[185389]: 2026-01-26 17:24:50.729 185393 DEBUG oslo_concurrency.lockutils [None req-972e988c-a6d2-4816-a7f8-ccd11dfa5102 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Lock "a7263205-e4bb-4bdd-bdf4-a91586c033c2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.900s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:24:50 compute-0 nova_compute[185389]: 2026-01-26 17:24:50.736 185393 DEBUG nova.network.neutron [req-ea8ef700-fc7b-4608-8068-fde7606a7336 req-9a08b97d-17df-4852-857b-b856e4c23f11 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: a7263205-e4bb-4bdd-bdf4-a91586c033c2] Updated VIF entry in instance network info cache for port 07768be0-5acf-4962-8e50-883ab34f0d88. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 17:24:50 compute-0 nova_compute[185389]: 2026-01-26 17:24:50.736 185393 DEBUG nova.network.neutron [req-ea8ef700-fc7b-4608-8068-fde7606a7336 req-9a08b97d-17df-4852-857b-b856e4c23f11 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: a7263205-e4bb-4bdd-bdf4-a91586c033c2] Updating instance_info_cache with network_info: [{"id": "07768be0-5acf-4962-8e50-883ab34f0d88", "address": "fa:16:3e:c2:6d:94", "network": {"id": "181e9ee7-4b3f-4c71-9f87-ee525fae0a23", "bridge": "br-int", "label": "tempest-network-smoke--2054182957", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "72e07b00ccf54deaa85258e2c3332b45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07768be0-5a", "ovs_interfaceid": "07768be0-5acf-4962-8e50-883ab34f0d88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 17:24:50 compute-0 nova_compute[185389]: 2026-01-26 17:24:50.755 185393 DEBUG oslo_concurrency.lockutils [req-ea8ef700-fc7b-4608-8068-fde7606a7336 req-9a08b97d-17df-4852-857b-b856e4c23f11 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Releasing lock "refresh_cache-a7263205-e4bb-4bdd-bdf4-a91586c033c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 17:24:50 compute-0 nova_compute[185389]: 2026-01-26 17:24:50.945 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:24:53 compute-0 nova_compute[185389]: 2026-01-26 17:24:53.015 185393 DEBUG nova.compute.manager [req-d2ba2d18-dfd7-4d4b-97eb-f9551ad981e3 req-ab740caf-8932-4454-80a8-2664085e3ecb 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: a7263205-e4bb-4bdd-bdf4-a91586c033c2] Received event network-vif-plugged-07768be0-5acf-4962-8e50-883ab34f0d88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 17:24:53 compute-0 nova_compute[185389]: 2026-01-26 17:24:53.016 185393 DEBUG oslo_concurrency.lockutils [req-d2ba2d18-dfd7-4d4b-97eb-f9551ad981e3 req-ab740caf-8932-4454-80a8-2664085e3ecb 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquiring lock "a7263205-e4bb-4bdd-bdf4-a91586c033c2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:24:53 compute-0 nova_compute[185389]: 2026-01-26 17:24:53.016 185393 DEBUG oslo_concurrency.lockutils [req-d2ba2d18-dfd7-4d4b-97eb-f9551ad981e3 req-ab740caf-8932-4454-80a8-2664085e3ecb 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "a7263205-e4bb-4bdd-bdf4-a91586c033c2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:24:53 compute-0 nova_compute[185389]: 2026-01-26 17:24:53.017 185393 DEBUG oslo_concurrency.lockutils [req-d2ba2d18-dfd7-4d4b-97eb-f9551ad981e3 req-ab740caf-8932-4454-80a8-2664085e3ecb 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "a7263205-e4bb-4bdd-bdf4-a91586c033c2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:24:53 compute-0 nova_compute[185389]: 2026-01-26 17:24:53.017 185393 DEBUG nova.compute.manager [req-d2ba2d18-dfd7-4d4b-97eb-f9551ad981e3 req-ab740caf-8932-4454-80a8-2664085e3ecb 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: a7263205-e4bb-4bdd-bdf4-a91586c033c2] No waiting events found dispatching network-vif-plugged-07768be0-5acf-4962-8e50-883ab34f0d88 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 17:24:53 compute-0 nova_compute[185389]: 2026-01-26 17:24:53.018 185393 WARNING nova.compute.manager [req-d2ba2d18-dfd7-4d4b-97eb-f9551ad981e3 req-ab740caf-8932-4454-80a8-2664085e3ecb 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: a7263205-e4bb-4bdd-bdf4-a91586c033c2] Received unexpected event network-vif-plugged-07768be0-5acf-4962-8e50-883ab34f0d88 for instance with vm_state active and task_state None.
Jan 26 17:24:53 compute-0 NetworkManager[56253]: <info>  [1769448293.5749] manager: (patch-br-int-to-provnet-10704259-5999-4b8c-a177-c158eb08b0dd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/70)
Jan 26 17:24:53 compute-0 NetworkManager[56253]: <info>  [1769448293.5781] manager: (patch-provnet-10704259-5999-4b8c-a177-c158eb08b0dd-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/71)
Jan 26 17:24:53 compute-0 nova_compute[185389]: 2026-01-26 17:24:53.593 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:24:53 compute-0 nova_compute[185389]: 2026-01-26 17:24:53.731 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:24:53 compute-0 ovn_controller[97699]: 2026-01-26T17:24:53Z|00145|binding|INFO|Releasing lport dd4ac4a7-c264-4fc8-95aa-36a318cdf39e from this chassis (sb_readonly=0)
Jan 26 17:24:53 compute-0 ovn_controller[97699]: 2026-01-26T17:24:53Z|00146|binding|INFO|Releasing lport 072b84ed-db94-41f8-b8ae-79603b591704 from this chassis (sb_readonly=0)
Jan 26 17:24:53 compute-0 nova_compute[185389]: 2026-01-26 17:24:53.785 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:24:53 compute-0 nova_compute[185389]: 2026-01-26 17:24:53.841 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:24:54 compute-0 nova_compute[185389]: 2026-01-26 17:24:54.448 185393 DEBUG nova.compute.manager [req-d1128fc6-c6cd-49c0-ba2c-51ff34608c1f req-be27cd31-ffc6-4b79-b04c-d84eb291bfef 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: a7263205-e4bb-4bdd-bdf4-a91586c033c2] Received event network-changed-07768be0-5acf-4962-8e50-883ab34f0d88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 17:24:54 compute-0 nova_compute[185389]: 2026-01-26 17:24:54.449 185393 DEBUG nova.compute.manager [req-d1128fc6-c6cd-49c0-ba2c-51ff34608c1f req-be27cd31-ffc6-4b79-b04c-d84eb291bfef 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: a7263205-e4bb-4bdd-bdf4-a91586c033c2] Refreshing instance network info cache due to event network-changed-07768be0-5acf-4962-8e50-883ab34f0d88. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 17:24:54 compute-0 nova_compute[185389]: 2026-01-26 17:24:54.450 185393 DEBUG oslo_concurrency.lockutils [req-d1128fc6-c6cd-49c0-ba2c-51ff34608c1f req-be27cd31-ffc6-4b79-b04c-d84eb291bfef 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquiring lock "refresh_cache-a7263205-e4bb-4bdd-bdf4-a91586c033c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 17:24:54 compute-0 nova_compute[185389]: 2026-01-26 17:24:54.450 185393 DEBUG oslo_concurrency.lockutils [req-d1128fc6-c6cd-49c0-ba2c-51ff34608c1f req-be27cd31-ffc6-4b79-b04c-d84eb291bfef 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquired lock "refresh_cache-a7263205-e4bb-4bdd-bdf4-a91586c033c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 17:24:54 compute-0 nova_compute[185389]: 2026-01-26 17:24:54.451 185393 DEBUG nova.network.neutron [req-d1128fc6-c6cd-49c0-ba2c-51ff34608c1f req-be27cd31-ffc6-4b79-b04c-d84eb291bfef 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: a7263205-e4bb-4bdd-bdf4-a91586c033c2] Refreshing network info cache for port 07768be0-5acf-4962-8e50-883ab34f0d88 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 17:24:55 compute-0 nova_compute[185389]: 2026-01-26 17:24:55.948 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:24:57 compute-0 nova_compute[185389]: 2026-01-26 17:24:57.226 185393 DEBUG nova.network.neutron [req-d1128fc6-c6cd-49c0-ba2c-51ff34608c1f req-be27cd31-ffc6-4b79-b04c-d84eb291bfef 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: a7263205-e4bb-4bdd-bdf4-a91586c033c2] Updated VIF entry in instance network info cache for port 07768be0-5acf-4962-8e50-883ab34f0d88. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 17:24:57 compute-0 nova_compute[185389]: 2026-01-26 17:24:57.229 185393 DEBUG nova.network.neutron [req-d1128fc6-c6cd-49c0-ba2c-51ff34608c1f req-be27cd31-ffc6-4b79-b04c-d84eb291bfef 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: a7263205-e4bb-4bdd-bdf4-a91586c033c2] Updating instance_info_cache with network_info: [{"id": "07768be0-5acf-4962-8e50-883ab34f0d88", "address": "fa:16:3e:c2:6d:94", "network": {"id": "181e9ee7-4b3f-4c71-9f87-ee525fae0a23", "bridge": "br-int", "label": "tempest-network-smoke--2054182957", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "72e07b00ccf54deaa85258e2c3332b45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07768be0-5a", "ovs_interfaceid": "07768be0-5acf-4962-8e50-883ab34f0d88", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 17:24:57 compute-0 nova_compute[185389]: 2026-01-26 17:24:57.278 185393 DEBUG oslo_concurrency.lockutils [req-d1128fc6-c6cd-49c0-ba2c-51ff34608c1f req-be27cd31-ffc6-4b79-b04c-d84eb291bfef 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Releasing lock "refresh_cache-a7263205-e4bb-4bdd-bdf4-a91586c033c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 17:24:58 compute-0 nova_compute[185389]: 2026-01-26 17:24:58.843 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:24:59 compute-0 podman[201244]: time="2026-01-26T17:24:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 17:24:59 compute-0 podman[201244]: @ - - [26/Jan/2026:17:24:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29741 "" "Go-http-client/1.1"
Jan 26 17:24:59 compute-0 podman[201244]: @ - - [26/Jan/2026:17:24:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4848 "" "Go-http-client/1.1"
Jan 26 17:25:00 compute-0 nova_compute[185389]: 2026-01-26 17:25:00.951 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:25:01 compute-0 openstack_network_exporter[204387]: ERROR   17:25:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 17:25:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:25:01 compute-0 openstack_network_exporter[204387]: ERROR   17:25:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 17:25:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:25:01 compute-0 nova_compute[185389]: 2026-01-26 17:25:01.683 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:25:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:25:01.780 106955 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:25:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:25:01.781 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:25:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:25:01.782 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:25:03 compute-0 nova_compute[185389]: 2026-01-26 17:25:03.847 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:25:05 compute-0 nova_compute[185389]: 2026-01-26 17:25:05.957 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:25:07 compute-0 nova_compute[185389]: 2026-01-26 17:25:07.966 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:25:08 compute-0 nova_compute[185389]: 2026-01-26 17:25:08.851 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:25:10 compute-0 ovn_controller[97699]: 2026-01-26T17:25:10Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ea:9e:d9 10.100.3.123
Jan 26 17:25:10 compute-0 ovn_controller[97699]: 2026-01-26T17:25:10Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ea:9e:d9 10.100.3.123
Jan 26 17:25:10 compute-0 nova_compute[185389]: 2026-01-26 17:25:10.960 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:25:12 compute-0 podman[258876]: 2026-01-26 17:25:12.224214074 +0000 UTC m=+0.089396148 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260120, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, managed_by=edpm_ansible)
Jan 26 17:25:12 compute-0 podman[258875]: 2026-01-26 17:25:12.226995369 +0000 UTC m=+0.092386289 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, vcs-type=git, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., config_id=openstack_network_exporter, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, name=ubi9-minimal)
Jan 26 17:25:12 compute-0 podman[258877]: 2026-01-26 17:25:12.239430457 +0000 UTC m=+0.104795666 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 26 17:25:13 compute-0 nova_compute[185389]: 2026-01-26 17:25:13.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:25:13 compute-0 nova_compute[185389]: 2026-01-26 17:25:13.855 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:25:14 compute-0 podman[258935]: 2026-01-26 17:25:14.185585836 +0000 UTC m=+0.075965903 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 26 17:25:15 compute-0 podman[258956]: 2026-01-26 17:25:15.203390256 +0000 UTC m=+0.084350300 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 26 17:25:15 compute-0 nova_compute[185389]: 2026-01-26 17:25:15.317 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:25:15 compute-0 nova_compute[185389]: 2026-01-26 17:25:15.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:25:15 compute-0 nova_compute[185389]: 2026-01-26 17:25:15.720 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 17:25:15 compute-0 nova_compute[185389]: 2026-01-26 17:25:15.962 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:25:17 compute-0 nova_compute[185389]: 2026-01-26 17:25:17.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:25:17 compute-0 nova_compute[185389]: 2026-01-26 17:25:17.721 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 17:25:18 compute-0 nova_compute[185389]: 2026-01-26 17:25:18.165 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "refresh_cache-cf6218c0-bc2c-4097-91df-f60657ef7ab1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 17:25:18 compute-0 nova_compute[185389]: 2026-01-26 17:25:18.166 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquired lock "refresh_cache-cf6218c0-bc2c-4097-91df-f60657ef7ab1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 17:25:18 compute-0 nova_compute[185389]: 2026-01-26 17:25:18.167 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 26 17:25:18 compute-0 nova_compute[185389]: 2026-01-26 17:25:18.858 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:25:19 compute-0 podman[258978]: 2026-01-26 17:25:19.217210324 +0000 UTC m=+0.092990386 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, release=1214.1726694543, vcs-type=git, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.openshift.expose-services=, name=ubi9, vendor=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64)
Jan 26 17:25:19 compute-0 podman[258977]: 2026-01-26 17:25:19.229408865 +0000 UTC m=+0.109533494 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202)
Jan 26 17:25:19 compute-0 podman[258976]: 2026-01-26 17:25:19.242317246 +0000 UTC m=+0.129953249 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
container_name=ovn_controller, org.label-schema.build-date=20251202)
Jan 26 17:25:20 compute-0 nova_compute[185389]: 2026-01-26 17:25:20.330 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:25:20 compute-0 nova_compute[185389]: 2026-01-26 17:25:20.473 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] Updating instance_info_cache with network_info: [{"id": "994f4b51-014f-469e-9096-4ffe2dafa019", "address": "fa:16:3e:d9:71:2d", "network": {"id": "181e9ee7-4b3f-4c71-9f87-ee525fae0a23", "bridge": "br-int", "label": "tempest-network-smoke--2054182957", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "72e07b00ccf54deaa85258e2c3332b45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap994f4b51-01", "ovs_interfaceid": "994f4b51-014f-469e-9096-4ffe2dafa019", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 17:25:20 compute-0 nova_compute[185389]: 2026-01-26 17:25:20.701 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Releasing lock "refresh_cache-cf6218c0-bc2c-4097-91df-f60657ef7ab1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 17:25:20 compute-0 nova_compute[185389]: 2026-01-26 17:25:20.702 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 26 17:25:20 compute-0 nova_compute[185389]: 2026-01-26 17:25:20.702 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:25:20 compute-0 nova_compute[185389]: 2026-01-26 17:25:20.702 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:25:20 compute-0 nova_compute[185389]: 2026-01-26 17:25:20.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:25:20 compute-0 nova_compute[185389]: 2026-01-26 17:25:20.964 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:25:23 compute-0 nova_compute[185389]: 2026-01-26 17:25:23.861 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:25:24 compute-0 nova_compute[185389]: 2026-01-26 17:25:24.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:25:25 compute-0 nova_compute[185389]: 2026-01-26 17:25:25.173 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:25:25 compute-0 nova_compute[185389]: 2026-01-26 17:25:25.174 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:25:25 compute-0 nova_compute[185389]: 2026-01-26 17:25:25.174 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:25:25 compute-0 nova_compute[185389]: 2026-01-26 17:25:25.175 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 17:25:25 compute-0 nova_compute[185389]: 2026-01-26 17:25:25.319 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cf6218c0-bc2c-4097-91df-f60657ef7ab1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:25:25 compute-0 ovn_controller[97699]: 2026-01-26T17:25:25Z|00147|memory|INFO|peak resident set size grew 50% in last 3952.2 seconds, from 16000 kB to 24012 kB
Jan 26 17:25:25 compute-0 ovn_controller[97699]: 2026-01-26T17:25:25Z|00148|memory|INFO|idl-cells-OVN_Southbound:10096 idl-cells-Open_vSwitch:870 if_status_mgr_ifaces_state_usage-KB:1 if_status_mgr_ifaces_usage-KB:1 lflow-cache-entries-cache-expr:323 lflow-cache-entries-cache-matches:280 lflow-cache-size-KB:1365 local_datapath_usage-KB:3 ofctrl_desired_flow_usage-KB:598 ofctrl_installed_flow_usage-KB:435 ofctrl_sb_flow_ref_usage-KB:228
Jan 26 17:25:25 compute-0 nova_compute[185389]: 2026-01-26 17:25:25.387 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cf6218c0-bc2c-4097-91df-f60657ef7ab1/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:25:25 compute-0 nova_compute[185389]: 2026-01-26 17:25:25.388 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cf6218c0-bc2c-4097-91df-f60657ef7ab1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:25:25 compute-0 nova_compute[185389]: 2026-01-26 17:25:25.458 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cf6218c0-bc2c-4097-91df-f60657ef7ab1/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:25:25 compute-0 nova_compute[185389]: 2026-01-26 17:25:25.486 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a7263205-e4bb-4bdd-bdf4-a91586c033c2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:25:25 compute-0 nova_compute[185389]: 2026-01-26 17:25:25.571 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a7263205-e4bb-4bdd-bdf4-a91586c033c2/disk --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:25:25 compute-0 nova_compute[185389]: 2026-01-26 17:25:25.573 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a7263205-e4bb-4bdd-bdf4-a91586c033c2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:25:25 compute-0 nova_compute[185389]: 2026-01-26 17:25:25.640 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a7263205-e4bb-4bdd-bdf4-a91586c033c2/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:25:25 compute-0 nova_compute[185389]: 2026-01-26 17:25:25.648 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f9b0315f-2a3c-471e-b629-b19d90a40a97/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:25:25 compute-0 nova_compute[185389]: 2026-01-26 17:25:25.712 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f9b0315f-2a3c-471e-b629-b19d90a40a97/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:25:25 compute-0 nova_compute[185389]: 2026-01-26 17:25:25.717 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f9b0315f-2a3c-471e-b629-b19d90a40a97/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:25:25 compute-0 nova_compute[185389]: 2026-01-26 17:25:25.776 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f9b0315f-2a3c-471e-b629-b19d90a40a97/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:25:25 compute-0 nova_compute[185389]: 2026-01-26 17:25:25.966 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:25:26 compute-0 nova_compute[185389]: 2026-01-26 17:25:26.173 185393 WARNING nova.virt.libvirt.driver [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:25:26 compute-0 nova_compute[185389]: 2026-01-26 17:25:26.175 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4837MB free_disk=72.2589111328125GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 17:25:26 compute-0 nova_compute[185389]: 2026-01-26 17:25:26.176 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:25:26 compute-0 nova_compute[185389]: 2026-01-26 17:25:26.176 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:25:26 compute-0 ovn_controller[97699]: 2026-01-26T17:25:26Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c2:6d:94 10.100.0.3
Jan 26 17:25:26 compute-0 ovn_controller[97699]: 2026-01-26T17:25:26Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c2:6d:94 10.100.0.3
Jan 26 17:25:26 compute-0 nova_compute[185389]: 2026-01-26 17:25:26.889 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance cf6218c0-bc2c-4097-91df-f60657ef7ab1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 17:25:26 compute-0 nova_compute[185389]: 2026-01-26 17:25:26.890 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance f9b0315f-2a3c-471e-b629-b19d90a40a97 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 17:25:26 compute-0 nova_compute[185389]: 2026-01-26 17:25:26.890 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance a7263205-e4bb-4bdd-bdf4-a91586c033c2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 17:25:26 compute-0 nova_compute[185389]: 2026-01-26 17:25:26.891 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 17:25:26 compute-0 nova_compute[185389]: 2026-01-26 17:25:26.891 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=79GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 17:25:27 compute-0 nova_compute[185389]: 2026-01-26 17:25:27.107 185393 DEBUG nova.compute.provider_tree [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:25:27 compute-0 nova_compute[185389]: 2026-01-26 17:25:27.128 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:25:27 compute-0 nova_compute[185389]: 2026-01-26 17:25:27.154 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 17:25:27 compute-0 nova_compute[185389]: 2026-01-26 17:25:27.159 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.983s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:25:28 compute-0 nova_compute[185389]: 2026-01-26 17:25:28.863 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:25:29 compute-0 podman[201244]: time="2026-01-26T17:25:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 17:25:29 compute-0 podman[201244]: @ - - [26/Jan/2026:17:25:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29741 "" "Go-http-client/1.1"
Jan 26 17:25:29 compute-0 podman[201244]: @ - - [26/Jan/2026:17:25:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4848 "" "Go-http-client/1.1"
Jan 26 17:25:30 compute-0 nova_compute[185389]: 2026-01-26 17:25:30.969 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:25:31 compute-0 openstack_network_exporter[204387]: ERROR   17:25:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 17:25:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:25:31 compute-0 openstack_network_exporter[204387]: ERROR   17:25:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 17:25:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:25:31 compute-0 nova_compute[185389]: 2026-01-26 17:25:31.577 185393 INFO nova.compute.manager [None req-fe549dce-ca90-4e91-9aed-ed575d78f239 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] [instance: a7263205-e4bb-4bdd-bdf4-a91586c033c2] Get console output
Jan 26 17:25:31 compute-0 nova_compute[185389]: 2026-01-26 17:25:31.585 238630 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 26 17:25:31 compute-0 nova_compute[185389]: 2026-01-26 17:25:31.868 185393 DEBUG oslo_concurrency.lockutils [None req-fbaaee7e-6483-493f-aef4-c3a6e7519198 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Acquiring lock "a7263205-e4bb-4bdd-bdf4-a91586c033c2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:25:31 compute-0 nova_compute[185389]: 2026-01-26 17:25:31.869 185393 DEBUG oslo_concurrency.lockutils [None req-fbaaee7e-6483-493f-aef4-c3a6e7519198 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Lock "a7263205-e4bb-4bdd-bdf4-a91586c033c2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:25:31 compute-0 nova_compute[185389]: 2026-01-26 17:25:31.870 185393 DEBUG oslo_concurrency.lockutils [None req-fbaaee7e-6483-493f-aef4-c3a6e7519198 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Acquiring lock "a7263205-e4bb-4bdd-bdf4-a91586c033c2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:25:31 compute-0 nova_compute[185389]: 2026-01-26 17:25:31.870 185393 DEBUG oslo_concurrency.lockutils [None req-fbaaee7e-6483-493f-aef4-c3a6e7519198 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Lock "a7263205-e4bb-4bdd-bdf4-a91586c033c2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:25:31 compute-0 nova_compute[185389]: 2026-01-26 17:25:31.871 185393 DEBUG oslo_concurrency.lockutils [None req-fbaaee7e-6483-493f-aef4-c3a6e7519198 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Lock "a7263205-e4bb-4bdd-bdf4-a91586c033c2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:25:31 compute-0 nova_compute[185389]: 2026-01-26 17:25:31.872 185393 INFO nova.compute.manager [None req-fbaaee7e-6483-493f-aef4-c3a6e7519198 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] [instance: a7263205-e4bb-4bdd-bdf4-a91586c033c2] Terminating instance
Jan 26 17:25:31 compute-0 nova_compute[185389]: 2026-01-26 17:25:31.874 185393 DEBUG nova.compute.manager [None req-fbaaee7e-6483-493f-aef4-c3a6e7519198 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] [instance: a7263205-e4bb-4bdd-bdf4-a91586c033c2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 17:25:31 compute-0 kernel: tap07768be0-5a (unregistering): left promiscuous mode
Jan 26 17:25:31 compute-0 NetworkManager[56253]: <info>  [1769448331.9073] device (tap07768be0-5a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 17:25:31 compute-0 nova_compute[185389]: 2026-01-26 17:25:31.927 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:25:31 compute-0 ovn_controller[97699]: 2026-01-26T17:25:31Z|00149|binding|INFO|Releasing lport 07768be0-5acf-4962-8e50-883ab34f0d88 from this chassis (sb_readonly=0)
Jan 26 17:25:31 compute-0 ovn_controller[97699]: 2026-01-26T17:25:31Z|00150|binding|INFO|Setting lport 07768be0-5acf-4962-8e50-883ab34f0d88 down in Southbound
Jan 26 17:25:31 compute-0 ovn_controller[97699]: 2026-01-26T17:25:31Z|00151|binding|INFO|Removing iface tap07768be0-5a ovn-installed in OVS
Jan 26 17:25:31 compute-0 nova_compute[185389]: 2026-01-26 17:25:31.933 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:25:31 compute-0 nova_compute[185389]: 2026-01-26 17:25:31.948 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:25:31 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000e.scope: Deactivated successfully.
Jan 26 17:25:31 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000e.scope: Consumed 38.076s CPU time.
Jan 26 17:25:31 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:25:31.976 106955 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c2:6d:94 10.100.0.3'], port_security=['fa:16:3e:c2:6d:94 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'a7263205-e4bb-4bdd-bdf4-a91586c033c2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-181e9ee7-4b3f-4c71-9f87-ee525fae0a23', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '72e07b00ccf54deaa85258e2c3332b45', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6acf885d-146c-4aa7-b15f-05d4ceef5c7d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.233'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ba7e92f1-bf2b-49e8-a683-c5ce4fc70674, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7faee18cfdf0>], logical_port=07768be0-5acf-4962-8e50-883ab34f0d88) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7faee18cfdf0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 17:25:31 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:25:31.978 106955 INFO neutron.agent.ovn.metadata.agent [-] Port 07768be0-5acf-4962-8e50-883ab34f0d88 in datapath 181e9ee7-4b3f-4c71-9f87-ee525fae0a23 unbound from our chassis
Jan 26 17:25:31 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:25:31.981 106955 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 181e9ee7-4b3f-4c71-9f87-ee525fae0a23
Jan 26 17:25:31 compute-0 systemd-machined[156679]: Machine qemu-15-instance-0000000e terminated.
Jan 26 17:25:32 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:25:32.010 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[0e39d862-6d8f-4edc-aaa2-45d80869d3d1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:25:32 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:25:32.050 238787 DEBUG oslo.privsep.daemon [-] privsep: reply[e89866df-8a29-458f-ace1-a2f8f8ff6483]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:25:32 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:25:32.055 238787 DEBUG oslo.privsep.daemon [-] privsep: reply[5dc6aeed-a7df-4943-966c-a51bbd585607]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:25:32 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:25:32.090 238787 DEBUG oslo.privsep.daemon [-] privsep: reply[d075306b-1649-4940-a805-8e3b10bfe087]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:25:32 compute-0 nova_compute[185389]: 2026-01-26 17:25:32.106 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:25:32 compute-0 nova_compute[185389]: 2026-01-26 17:25:32.114 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:25:32 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:25:32.120 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[6a44805b-1e38-4b54-8c41-3cc371d232bf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap181e9ee7-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:85:aa:f4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 8, 'rx_bytes': 658, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 8, 'rx_bytes': 658, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 37], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 685671, 'reachable_time': 44061, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 259080, 'error': None, 'target': 'ovnmeta-181e9ee7-4b3f-4c71-9f87-ee525fae0a23', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:25:32 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:25:32.147 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[595822f6-c6d7-4178-9cd4-70be5798206c]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap181e9ee7-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 685686, 'tstamp': 685686}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 259090, 'error': None, 'target': 'ovnmeta-181e9ee7-4b3f-4c71-9f87-ee525fae0a23', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap181e9ee7-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 685690, 'tstamp': 685690}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 259090, 'error': None, 'target': 'ovnmeta-181e9ee7-4b3f-4c71-9f87-ee525fae0a23', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:25:32 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:25:32.150 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap181e9ee7-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:25:32 compute-0 nova_compute[185389]: 2026-01-26 17:25:32.152 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:25:32 compute-0 nova_compute[185389]: 2026-01-26 17:25:32.164 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:25:32 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:25:32.167 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap181e9ee7-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:25:32 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:25:32.168 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 17:25:32 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:25:32.169 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap181e9ee7-40, col_values=(('external_ids', {'iface-id': 'dd4ac4a7-c264-4fc8-95aa-36a318cdf39e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:25:32 compute-0 nova_compute[185389]: 2026-01-26 17:25:32.168 185393 INFO nova.virt.libvirt.driver [-] [instance: a7263205-e4bb-4bdd-bdf4-a91586c033c2] Instance destroyed successfully.
Jan 26 17:25:32 compute-0 nova_compute[185389]: 2026-01-26 17:25:32.168 185393 DEBUG nova.objects.instance [None req-fbaaee7e-6483-493f-aef4-c3a6e7519198 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Lazy-loading 'resources' on Instance uuid a7263205-e4bb-4bdd-bdf4-a91586c033c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 17:25:32 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:25:32.169 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 17:25:32 compute-0 nova_compute[185389]: 2026-01-26 17:25:32.316 185393 DEBUG nova.virt.libvirt.vif [None req-fbaaee7e-6483-493f-aef4-c3a6e7519198 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T17:24:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1378260093',display_name='tempest-TestNetworkBasicOps-server-1378260093',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1378260093',id=14,image_ref='90acf026-cf3a-409a-999e-35d89bb9a6bf',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFcO5mlcyR223mUpsTonwEbNjH48jCuOU16DfMyIISz2MiI9UeUKnbTf4AFhGdeh1wBbW2VTzpLAYp/p1Wl4FSCUVlGhx9rmAJ1p58EqotOHx5lVxUSFYIJBimVjOiy9fA==',key_name='tempest-TestNetworkBasicOps-430105833',keypairs=<?>,launch_index=0,launched_at=2026-01-26T17:24:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='72e07b00ccf54deaa85258e2c3332b45',ramdisk_id='',reservation_id='r-x295li7g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='90acf026-cf3a-409a-999e-35d89bb9a6bf',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-420464940',owner_user_name='tempest-TestNetworkBasicOps-420464940-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T17:24:50Z,user_data=None,user_id='a04a28d3bd7648abb04b59df0aeee0aa',uuid=a7263205-e4bb-4bdd-bdf4-a91586c033c2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "07768be0-5acf-4962-8e50-883ab34f0d88", "address": "fa:16:3e:c2:6d:94", "network": {"id": "181e9ee7-4b3f-4c71-9f87-ee525fae0a23", "bridge": "br-int", "label": "tempest-network-smoke--2054182957", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "72e07b00ccf54deaa85258e2c3332b45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07768be0-5a", "ovs_interfaceid": "07768be0-5acf-4962-8e50-883ab34f0d88", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 17:25:32 compute-0 nova_compute[185389]: 2026-01-26 17:25:32.317 185393 DEBUG nova.network.os_vif_util [None req-fbaaee7e-6483-493f-aef4-c3a6e7519198 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Converting VIF {"id": "07768be0-5acf-4962-8e50-883ab34f0d88", "address": "fa:16:3e:c2:6d:94", "network": {"id": "181e9ee7-4b3f-4c71-9f87-ee525fae0a23", "bridge": "br-int", "label": "tempest-network-smoke--2054182957", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "72e07b00ccf54deaa85258e2c3332b45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07768be0-5a", "ovs_interfaceid": "07768be0-5acf-4962-8e50-883ab34f0d88", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 17:25:32 compute-0 nova_compute[185389]: 2026-01-26 17:25:32.319 185393 DEBUG nova.network.os_vif_util [None req-fbaaee7e-6483-493f-aef4-c3a6e7519198 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c2:6d:94,bridge_name='br-int',has_traffic_filtering=True,id=07768be0-5acf-4962-8e50-883ab34f0d88,network=Network(181e9ee7-4b3f-4c71-9f87-ee525fae0a23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07768be0-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 17:25:32 compute-0 nova_compute[185389]: 2026-01-26 17:25:32.319 185393 DEBUG os_vif [None req-fbaaee7e-6483-493f-aef4-c3a6e7519198 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c2:6d:94,bridge_name='br-int',has_traffic_filtering=True,id=07768be0-5acf-4962-8e50-883ab34f0d88,network=Network(181e9ee7-4b3f-4c71-9f87-ee525fae0a23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07768be0-5a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 17:25:32 compute-0 nova_compute[185389]: 2026-01-26 17:25:32.322 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:25:32 compute-0 nova_compute[185389]: 2026-01-26 17:25:32.322 185393 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap07768be0-5a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:25:32 compute-0 nova_compute[185389]: 2026-01-26 17:25:32.325 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:25:32 compute-0 nova_compute[185389]: 2026-01-26 17:25:32.328 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 17:25:32 compute-0 nova_compute[185389]: 2026-01-26 17:25:32.332 185393 INFO os_vif [None req-fbaaee7e-6483-493f-aef4-c3a6e7519198 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c2:6d:94,bridge_name='br-int',has_traffic_filtering=True,id=07768be0-5acf-4962-8e50-883ab34f0d88,network=Network(181e9ee7-4b3f-4c71-9f87-ee525fae0a23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07768be0-5a')
Jan 26 17:25:32 compute-0 nova_compute[185389]: 2026-01-26 17:25:32.334 185393 INFO nova.virt.libvirt.driver [None req-fbaaee7e-6483-493f-aef4-c3a6e7519198 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] [instance: a7263205-e4bb-4bdd-bdf4-a91586c033c2] Deleting instance files /var/lib/nova/instances/a7263205-e4bb-4bdd-bdf4-a91586c033c2_del
Jan 26 17:25:32 compute-0 nova_compute[185389]: 2026-01-26 17:25:32.335 185393 INFO nova.virt.libvirt.driver [None req-fbaaee7e-6483-493f-aef4-c3a6e7519198 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] [instance: a7263205-e4bb-4bdd-bdf4-a91586c033c2] Deletion of /var/lib/nova/instances/a7263205-e4bb-4bdd-bdf4-a91586c033c2_del complete
Jan 26 17:25:32 compute-0 nova_compute[185389]: 2026-01-26 17:25:32.403 185393 INFO nova.compute.manager [None req-fbaaee7e-6483-493f-aef4-c3a6e7519198 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] [instance: a7263205-e4bb-4bdd-bdf4-a91586c033c2] Took 0.53 seconds to destroy the instance on the hypervisor.
Jan 26 17:25:32 compute-0 nova_compute[185389]: 2026-01-26 17:25:32.416 185393 DEBUG oslo.service.loopingcall [None req-fbaaee7e-6483-493f-aef4-c3a6e7519198 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 17:25:32 compute-0 nova_compute[185389]: 2026-01-26 17:25:32.417 185393 DEBUG nova.compute.manager [-] [instance: a7263205-e4bb-4bdd-bdf4-a91586c033c2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 17:25:32 compute-0 nova_compute[185389]: 2026-01-26 17:25:32.417 185393 DEBUG nova.network.neutron [-] [instance: a7263205-e4bb-4bdd-bdf4-a91586c033c2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 17:25:33 compute-0 nova_compute[185389]: 2026-01-26 17:25:33.804 185393 DEBUG nova.compute.manager [req-0489fa12-c008-432c-a227-60943a507157 req-06e446d7-82ac-4518-a5ca-687f54de17db 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: a7263205-e4bb-4bdd-bdf4-a91586c033c2] Received event network-vif-unplugged-07768be0-5acf-4962-8e50-883ab34f0d88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 17:25:33 compute-0 nova_compute[185389]: 2026-01-26 17:25:33.805 185393 DEBUG oslo_concurrency.lockutils [req-0489fa12-c008-432c-a227-60943a507157 req-06e446d7-82ac-4518-a5ca-687f54de17db 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquiring lock "a7263205-e4bb-4bdd-bdf4-a91586c033c2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:25:33 compute-0 nova_compute[185389]: 2026-01-26 17:25:33.806 185393 DEBUG oslo_concurrency.lockutils [req-0489fa12-c008-432c-a227-60943a507157 req-06e446d7-82ac-4518-a5ca-687f54de17db 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "a7263205-e4bb-4bdd-bdf4-a91586c033c2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:25:33 compute-0 nova_compute[185389]: 2026-01-26 17:25:33.806 185393 DEBUG oslo_concurrency.lockutils [req-0489fa12-c008-432c-a227-60943a507157 req-06e446d7-82ac-4518-a5ca-687f54de17db 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "a7263205-e4bb-4bdd-bdf4-a91586c033c2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:25:33 compute-0 nova_compute[185389]: 2026-01-26 17:25:33.807 185393 DEBUG nova.compute.manager [req-0489fa12-c008-432c-a227-60943a507157 req-06e446d7-82ac-4518-a5ca-687f54de17db 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: a7263205-e4bb-4bdd-bdf4-a91586c033c2] No waiting events found dispatching network-vif-unplugged-07768be0-5acf-4962-8e50-883ab34f0d88 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 17:25:33 compute-0 nova_compute[185389]: 2026-01-26 17:25:33.807 185393 DEBUG nova.compute.manager [req-0489fa12-c008-432c-a227-60943a507157 req-06e446d7-82ac-4518-a5ca-687f54de17db 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: a7263205-e4bb-4bdd-bdf4-a91586c033c2] Received event network-vif-unplugged-07768be0-5acf-4962-8e50-883ab34f0d88 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 17:25:34 compute-0 nova_compute[185389]: 2026-01-26 17:25:34.807 185393 DEBUG nova.network.neutron [-] [instance: a7263205-e4bb-4bdd-bdf4-a91586c033c2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 17:25:34 compute-0 nova_compute[185389]: 2026-01-26 17:25:34.826 185393 INFO nova.compute.manager [-] [instance: a7263205-e4bb-4bdd-bdf4-a91586c033c2] Took 2.41 seconds to deallocate network for instance.
Jan 26 17:25:34 compute-0 nova_compute[185389]: 2026-01-26 17:25:34.892 185393 DEBUG oslo_concurrency.lockutils [None req-fbaaee7e-6483-493f-aef4-c3a6e7519198 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:25:34 compute-0 nova_compute[185389]: 2026-01-26 17:25:34.893 185393 DEBUG oslo_concurrency.lockutils [None req-fbaaee7e-6483-493f-aef4-c3a6e7519198 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:25:35 compute-0 nova_compute[185389]: 2026-01-26 17:25:35.047 185393 DEBUG nova.compute.provider_tree [None req-fbaaee7e-6483-493f-aef4-c3a6e7519198 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:25:35 compute-0 nova_compute[185389]: 2026-01-26 17:25:35.065 185393 DEBUG nova.scheduler.client.report [None req-fbaaee7e-6483-493f-aef4-c3a6e7519198 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:25:35 compute-0 nova_compute[185389]: 2026-01-26 17:25:35.093 185393 DEBUG oslo_concurrency.lockutils [None req-fbaaee7e-6483-493f-aef4-c3a6e7519198 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.199s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:25:35 compute-0 nova_compute[185389]: 2026-01-26 17:25:35.128 185393 INFO nova.scheduler.client.report [None req-fbaaee7e-6483-493f-aef4-c3a6e7519198 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Deleted allocations for instance a7263205-e4bb-4bdd-bdf4-a91586c033c2
Jan 26 17:25:35 compute-0 nova_compute[185389]: 2026-01-26 17:25:35.158 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:25:35 compute-0 nova_compute[185389]: 2026-01-26 17:25:35.159 185393 DEBUG oslo_concurrency.lockutils [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] Acquiring lock "e14bdaa0-ac4b-4c4a-8036-640cb431e8b7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:25:35 compute-0 nova_compute[185389]: 2026-01-26 17:25:35.160 185393 DEBUG oslo_concurrency.lockutils [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] Lock "e14bdaa0-ac4b-4c4a-8036-640cb431e8b7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:25:35 compute-0 nova_compute[185389]: 2026-01-26 17:25:35.162 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:25:35 compute-0 nova_compute[185389]: 2026-01-26 17:25:35.209 185393 DEBUG nova.compute.manager [req-46e22358-0781-4ba9-a9eb-1a2d082a06d2 req-ddb8221c-260c-4957-a38f-29eff2eafd91 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: a7263205-e4bb-4bdd-bdf4-a91586c033c2] Received event network-vif-deleted-07768be0-5acf-4962-8e50-883ab34f0d88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 17:25:35 compute-0 nova_compute[185389]: 2026-01-26 17:25:35.214 185393 DEBUG nova.compute.manager [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] [instance: e14bdaa0-ac4b-4c4a-8036-640cb431e8b7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 17:25:35 compute-0 nova_compute[185389]: 2026-01-26 17:25:35.262 185393 DEBUG oslo_concurrency.lockutils [None req-fbaaee7e-6483-493f-aef4-c3a6e7519198 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Lock "a7263205-e4bb-4bdd-bdf4-a91586c033c2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.393s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:25:35 compute-0 nova_compute[185389]: 2026-01-26 17:25:35.290 185393 DEBUG oslo_concurrency.lockutils [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:25:35 compute-0 nova_compute[185389]: 2026-01-26 17:25:35.291 185393 DEBUG oslo_concurrency.lockutils [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:25:35 compute-0 nova_compute[185389]: 2026-01-26 17:25:35.297 185393 DEBUG nova.virt.hardware [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 17:25:35 compute-0 nova_compute[185389]: 2026-01-26 17:25:35.298 185393 INFO nova.compute.claims [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] [instance: e14bdaa0-ac4b-4c4a-8036-640cb431e8b7] Claim successful on node compute-0.ctlplane.example.com
Jan 26 17:25:35 compute-0 nova_compute[185389]: 2026-01-26 17:25:35.454 185393 DEBUG nova.compute.provider_tree [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:25:35 compute-0 nova_compute[185389]: 2026-01-26 17:25:35.469 185393 DEBUG nova.scheduler.client.report [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:25:35 compute-0 nova_compute[185389]: 2026-01-26 17:25:35.507 185393 DEBUG oslo_concurrency.lockutils [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.216s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:25:35 compute-0 nova_compute[185389]: 2026-01-26 17:25:35.508 185393 DEBUG nova.compute.manager [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] [instance: e14bdaa0-ac4b-4c4a-8036-640cb431e8b7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 17:25:35 compute-0 nova_compute[185389]: 2026-01-26 17:25:35.574 185393 DEBUG nova.compute.manager [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] [instance: e14bdaa0-ac4b-4c4a-8036-640cb431e8b7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 17:25:35 compute-0 nova_compute[185389]: 2026-01-26 17:25:35.574 185393 DEBUG nova.network.neutron [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] [instance: e14bdaa0-ac4b-4c4a-8036-640cb431e8b7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 17:25:35 compute-0 nova_compute[185389]: 2026-01-26 17:25:35.597 185393 INFO nova.virt.libvirt.driver [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] [instance: e14bdaa0-ac4b-4c4a-8036-640cb431e8b7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 17:25:35 compute-0 nova_compute[185389]: 2026-01-26 17:25:35.617 185393 DEBUG nova.compute.manager [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] [instance: e14bdaa0-ac4b-4c4a-8036-640cb431e8b7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 17:25:35 compute-0 nova_compute[185389]: 2026-01-26 17:25:35.743 185393 DEBUG nova.compute.manager [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] [instance: e14bdaa0-ac4b-4c4a-8036-640cb431e8b7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 17:25:35 compute-0 nova_compute[185389]: 2026-01-26 17:25:35.750 185393 DEBUG nova.virt.libvirt.driver [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] [instance: e14bdaa0-ac4b-4c4a-8036-640cb431e8b7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 17:25:35 compute-0 nova_compute[185389]: 2026-01-26 17:25:35.751 185393 INFO nova.virt.libvirt.driver [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] [instance: e14bdaa0-ac4b-4c4a-8036-640cb431e8b7] Creating image(s)
Jan 26 17:25:35 compute-0 nova_compute[185389]: 2026-01-26 17:25:35.752 185393 DEBUG oslo_concurrency.lockutils [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] Acquiring lock "/var/lib/nova/instances/e14bdaa0-ac4b-4c4a-8036-640cb431e8b7/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:25:35 compute-0 nova_compute[185389]: 2026-01-26 17:25:35.752 185393 DEBUG oslo_concurrency.lockutils [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] Lock "/var/lib/nova/instances/e14bdaa0-ac4b-4c4a-8036-640cb431e8b7/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:25:35 compute-0 nova_compute[185389]: 2026-01-26 17:25:35.753 185393 DEBUG oslo_concurrency.lockutils [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] Lock "/var/lib/nova/instances/e14bdaa0-ac4b-4c4a-8036-640cb431e8b7/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:25:35 compute-0 nova_compute[185389]: 2026-01-26 17:25:35.768 185393 DEBUG oslo_concurrency.processutils [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/fce4846b88d54584e094a4cd72b3ae3b642ea493 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:25:35 compute-0 nova_compute[185389]: 2026-01-26 17:25:35.827 185393 DEBUG oslo_concurrency.processutils [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/fce4846b88d54584e094a4cd72b3ae3b642ea493 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:25:35 compute-0 nova_compute[185389]: 2026-01-26 17:25:35.829 185393 DEBUG oslo_concurrency.lockutils [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] Acquiring lock "fce4846b88d54584e094a4cd72b3ae3b642ea493" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:25:35 compute-0 nova_compute[185389]: 2026-01-26 17:25:35.830 185393 DEBUG oslo_concurrency.lockutils [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] Lock "fce4846b88d54584e094a4cd72b3ae3b642ea493" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:25:35 compute-0 nova_compute[185389]: 2026-01-26 17:25:35.842 185393 DEBUG oslo_concurrency.processutils [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/fce4846b88d54584e094a4cd72b3ae3b642ea493 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:25:35 compute-0 nova_compute[185389]: 2026-01-26 17:25:35.874 185393 DEBUG nova.policy [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f28bfbc50d234cffbe617e420542c11d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b66458547a0a47a3bec4b3808c40db40', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 17:25:35 compute-0 nova_compute[185389]: 2026-01-26 17:25:35.901 185393 DEBUG nova.compute.manager [req-fdd3d96e-81ed-42d8-9244-39cd91c9df20 req-83d11cff-ff2c-4db8-9472-f11e2c242296 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: a7263205-e4bb-4bdd-bdf4-a91586c033c2] Received event network-vif-plugged-07768be0-5acf-4962-8e50-883ab34f0d88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 17:25:35 compute-0 nova_compute[185389]: 2026-01-26 17:25:35.902 185393 DEBUG oslo_concurrency.lockutils [req-fdd3d96e-81ed-42d8-9244-39cd91c9df20 req-83d11cff-ff2c-4db8-9472-f11e2c242296 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquiring lock "a7263205-e4bb-4bdd-bdf4-a91586c033c2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:25:35 compute-0 nova_compute[185389]: 2026-01-26 17:25:35.903 185393 DEBUG oslo_concurrency.lockutils [req-fdd3d96e-81ed-42d8-9244-39cd91c9df20 req-83d11cff-ff2c-4db8-9472-f11e2c242296 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "a7263205-e4bb-4bdd-bdf4-a91586c033c2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:25:35 compute-0 nova_compute[185389]: 2026-01-26 17:25:35.903 185393 DEBUG oslo_concurrency.lockutils [req-fdd3d96e-81ed-42d8-9244-39cd91c9df20 req-83d11cff-ff2c-4db8-9472-f11e2c242296 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "a7263205-e4bb-4bdd-bdf4-a91586c033c2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:25:35 compute-0 nova_compute[185389]: 2026-01-26 17:25:35.904 185393 DEBUG nova.compute.manager [req-fdd3d96e-81ed-42d8-9244-39cd91c9df20 req-83d11cff-ff2c-4db8-9472-f11e2c242296 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: a7263205-e4bb-4bdd-bdf4-a91586c033c2] No waiting events found dispatching network-vif-plugged-07768be0-5acf-4962-8e50-883ab34f0d88 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 17:25:35 compute-0 nova_compute[185389]: 2026-01-26 17:25:35.905 185393 WARNING nova.compute.manager [req-fdd3d96e-81ed-42d8-9244-39cd91c9df20 req-83d11cff-ff2c-4db8-9472-f11e2c242296 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: a7263205-e4bb-4bdd-bdf4-a91586c033c2] Received unexpected event network-vif-plugged-07768be0-5acf-4962-8e50-883ab34f0d88 for instance with vm_state deleted and task_state None.
Jan 26 17:25:35 compute-0 nova_compute[185389]: 2026-01-26 17:25:35.907 185393 DEBUG oslo_concurrency.processutils [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/fce4846b88d54584e094a4cd72b3ae3b642ea493 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:25:35 compute-0 nova_compute[185389]: 2026-01-26 17:25:35.908 185393 DEBUG oslo_concurrency.processutils [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/fce4846b88d54584e094a4cd72b3ae3b642ea493,backing_fmt=raw /var/lib/nova/instances/e14bdaa0-ac4b-4c4a-8036-640cb431e8b7/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:25:35 compute-0 nova_compute[185389]: 2026-01-26 17:25:35.971 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:25:36 compute-0 nova_compute[185389]: 2026-01-26 17:25:36.111 185393 DEBUG oslo_concurrency.processutils [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/fce4846b88d54584e094a4cd72b3ae3b642ea493,backing_fmt=raw /var/lib/nova/instances/e14bdaa0-ac4b-4c4a-8036-640cb431e8b7/disk 1073741824" returned: 0 in 0.203s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:25:36 compute-0 nova_compute[185389]: 2026-01-26 17:25:36.112 185393 DEBUG oslo_concurrency.lockutils [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] Lock "fce4846b88d54584e094a4cd72b3ae3b642ea493" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.283s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:25:36 compute-0 nova_compute[185389]: 2026-01-26 17:25:36.113 185393 DEBUG oslo_concurrency.processutils [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/fce4846b88d54584e094a4cd72b3ae3b642ea493 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:25:36 compute-0 nova_compute[185389]: 2026-01-26 17:25:36.181 185393 DEBUG oslo_concurrency.processutils [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/fce4846b88d54584e094a4cd72b3ae3b642ea493 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:25:36 compute-0 nova_compute[185389]: 2026-01-26 17:25:36.182 185393 DEBUG nova.virt.disk.api [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] Checking if we can resize image /var/lib/nova/instances/e14bdaa0-ac4b-4c4a-8036-640cb431e8b7/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 26 17:25:36 compute-0 nova_compute[185389]: 2026-01-26 17:25:36.183 185393 DEBUG oslo_concurrency.processutils [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e14bdaa0-ac4b-4c4a-8036-640cb431e8b7/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:25:36 compute-0 nova_compute[185389]: 2026-01-26 17:25:36.249 185393 DEBUG oslo_concurrency.processutils [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e14bdaa0-ac4b-4c4a-8036-640cb431e8b7/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:25:36 compute-0 nova_compute[185389]: 2026-01-26 17:25:36.251 185393 DEBUG nova.virt.disk.api [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] Cannot resize image /var/lib/nova/instances/e14bdaa0-ac4b-4c4a-8036-640cb431e8b7/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 26 17:25:36 compute-0 nova_compute[185389]: 2026-01-26 17:25:36.252 185393 DEBUG nova.objects.instance [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] Lazy-loading 'migration_context' on Instance uuid e14bdaa0-ac4b-4c4a-8036-640cb431e8b7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 17:25:36 compute-0 nova_compute[185389]: 2026-01-26 17:25:36.309 185393 DEBUG nova.virt.libvirt.driver [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] [instance: e14bdaa0-ac4b-4c4a-8036-640cb431e8b7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 17:25:36 compute-0 nova_compute[185389]: 2026-01-26 17:25:36.310 185393 DEBUG nova.virt.libvirt.driver [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] [instance: e14bdaa0-ac4b-4c4a-8036-640cb431e8b7] Ensure instance console log exists: /var/lib/nova/instances/e14bdaa0-ac4b-4c4a-8036-640cb431e8b7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 17:25:36 compute-0 nova_compute[185389]: 2026-01-26 17:25:36.311 185393 DEBUG oslo_concurrency.lockutils [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:25:36 compute-0 nova_compute[185389]: 2026-01-26 17:25:36.312 185393 DEBUG oslo_concurrency.lockutils [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:25:36 compute-0 nova_compute[185389]: 2026-01-26 17:25:36.312 185393 DEBUG oslo_concurrency.lockutils [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:25:37 compute-0 nova_compute[185389]: 2026-01-26 17:25:37.164 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:25:37 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:25:37.166 106955 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '0e:35:12', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'de:09:3c:73:7d:ab'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 17:25:37 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:25:37.166 106955 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 26 17:25:37 compute-0 nova_compute[185389]: 2026-01-26 17:25:37.287 185393 DEBUG nova.network.neutron [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] [instance: e14bdaa0-ac4b-4c4a-8036-640cb431e8b7] Successfully created port: d92e58d6-ae98-4c68-82c7-6b27e1ed65d9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 17:25:37 compute-0 nova_compute[185389]: 2026-01-26 17:25:37.326 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:25:38 compute-0 nova_compute[185389]: 2026-01-26 17:25:38.842 185393 DEBUG oslo_concurrency.lockutils [None req-f05a9555-1424-46a7-b489-e21c2cd2b9e8 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Acquiring lock "cf6218c0-bc2c-4097-91df-f60657ef7ab1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:25:38 compute-0 nova_compute[185389]: 2026-01-26 17:25:38.843 185393 DEBUG oslo_concurrency.lockutils [None req-f05a9555-1424-46a7-b489-e21c2cd2b9e8 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Lock "cf6218c0-bc2c-4097-91df-f60657ef7ab1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:25:38 compute-0 nova_compute[185389]: 2026-01-26 17:25:38.844 185393 DEBUG oslo_concurrency.lockutils [None req-f05a9555-1424-46a7-b489-e21c2cd2b9e8 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Acquiring lock "cf6218c0-bc2c-4097-91df-f60657ef7ab1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:25:38 compute-0 nova_compute[185389]: 2026-01-26 17:25:38.845 185393 DEBUG oslo_concurrency.lockutils [None req-f05a9555-1424-46a7-b489-e21c2cd2b9e8 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Lock "cf6218c0-bc2c-4097-91df-f60657ef7ab1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:25:38 compute-0 nova_compute[185389]: 2026-01-26 17:25:38.845 185393 DEBUG oslo_concurrency.lockutils [None req-f05a9555-1424-46a7-b489-e21c2cd2b9e8 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Lock "cf6218c0-bc2c-4097-91df-f60657ef7ab1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:25:38 compute-0 nova_compute[185389]: 2026-01-26 17:25:38.847 185393 INFO nova.compute.manager [None req-f05a9555-1424-46a7-b489-e21c2cd2b9e8 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] Terminating instance
Jan 26 17:25:38 compute-0 nova_compute[185389]: 2026-01-26 17:25:38.848 185393 DEBUG nova.compute.manager [None req-f05a9555-1424-46a7-b489-e21c2cd2b9e8 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 17:25:38 compute-0 kernel: tap994f4b51-01 (unregistering): left promiscuous mode
Jan 26 17:25:38 compute-0 NetworkManager[56253]: <info>  [1769448338.8969] device (tap994f4b51-01): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 17:25:38 compute-0 ovn_controller[97699]: 2026-01-26T17:25:38Z|00152|binding|INFO|Releasing lport 994f4b51-014f-469e-9096-4ffe2dafa019 from this chassis (sb_readonly=0)
Jan 26 17:25:38 compute-0 ovn_controller[97699]: 2026-01-26T17:25:38Z|00153|binding|INFO|Setting lport 994f4b51-014f-469e-9096-4ffe2dafa019 down in Southbound
Jan 26 17:25:38 compute-0 ovn_controller[97699]: 2026-01-26T17:25:38Z|00154|binding|INFO|Removing iface tap994f4b51-01 ovn-installed in OVS
Jan 26 17:25:38 compute-0 nova_compute[185389]: 2026-01-26 17:25:38.920 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:25:38 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:25:38.922 106955 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d9:71:2d 10.100.0.13'], port_security=['fa:16:3e:d9:71:2d 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'cf6218c0-bc2c-4097-91df-f60657ef7ab1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-181e9ee7-4b3f-4c71-9f87-ee525fae0a23', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '72e07b00ccf54deaa85258e2c3332b45', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cabd41bb-de87-4531-96ff-89d10e2bc223', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ba7e92f1-bf2b-49e8-a683-c5ce4fc70674, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7faee18cfdf0>], logical_port=994f4b51-014f-469e-9096-4ffe2dafa019) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7faee18cfdf0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 17:25:38 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:25:38.924 106955 INFO neutron.agent.ovn.metadata.agent [-] Port 994f4b51-014f-469e-9096-4ffe2dafa019 in datapath 181e9ee7-4b3f-4c71-9f87-ee525fae0a23 unbound from our chassis
Jan 26 17:25:38 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:25:38.928 106955 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 181e9ee7-4b3f-4c71-9f87-ee525fae0a23, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 17:25:38 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:25:38.929 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[cd571e5b-ccce-4c8c-8074-0e61faa27dae]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:25:38 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:25:38.932 106955 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-181e9ee7-4b3f-4c71-9f87-ee525fae0a23 namespace which is not needed anymore
Jan 26 17:25:38 compute-0 nova_compute[185389]: 2026-01-26 17:25:38.942 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:25:38 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Jan 26 17:25:38 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000b.scope: Consumed 48.910s CPU time.
Jan 26 17:25:38 compute-0 systemd-machined[156679]: Machine qemu-12-instance-0000000b terminated.
Jan 26 17:25:39 compute-0 nova_compute[185389]: 2026-01-26 17:25:39.033 185393 DEBUG nova.network.neutron [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] [instance: e14bdaa0-ac4b-4c4a-8036-640cb431e8b7] Successfully updated port: d92e58d6-ae98-4c68-82c7-6b27e1ed65d9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 17:25:39 compute-0 nova_compute[185389]: 2026-01-26 17:25:39.046 185393 DEBUG oslo_concurrency.lockutils [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] Acquiring lock "refresh_cache-e14bdaa0-ac4b-4c4a-8036-640cb431e8b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 17:25:39 compute-0 nova_compute[185389]: 2026-01-26 17:25:39.047 185393 DEBUG oslo_concurrency.lockutils [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] Acquired lock "refresh_cache-e14bdaa0-ac4b-4c4a-8036-640cb431e8b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 17:25:39 compute-0 nova_compute[185389]: 2026-01-26 17:25:39.047 185393 DEBUG nova.network.neutron [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] [instance: e14bdaa0-ac4b-4c4a-8036-640cb431e8b7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 17:25:39 compute-0 nova_compute[185389]: 2026-01-26 17:25:39.139 185393 DEBUG nova.compute.manager [req-e393f003-1b23-48e3-8ff4-7f4b3d54d1e9 req-a400be28-f677-44a2-8587-fdeed69cb0b1 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: e14bdaa0-ac4b-4c4a-8036-640cb431e8b7] Received event network-changed-d92e58d6-ae98-4c68-82c7-6b27e1ed65d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 17:25:39 compute-0 nova_compute[185389]: 2026-01-26 17:25:39.140 185393 DEBUG nova.compute.manager [req-e393f003-1b23-48e3-8ff4-7f4b3d54d1e9 req-a400be28-f677-44a2-8587-fdeed69cb0b1 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: e14bdaa0-ac4b-4c4a-8036-640cb431e8b7] Refreshing instance network info cache due to event network-changed-d92e58d6-ae98-4c68-82c7-6b27e1ed65d9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 17:25:39 compute-0 nova_compute[185389]: 2026-01-26 17:25:39.142 185393 DEBUG oslo_concurrency.lockutils [req-e393f003-1b23-48e3-8ff4-7f4b3d54d1e9 req-a400be28-f677-44a2-8587-fdeed69cb0b1 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquiring lock "refresh_cache-e14bdaa0-ac4b-4c4a-8036-640cb431e8b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 17:25:39 compute-0 nova_compute[185389]: 2026-01-26 17:25:39.150 185393 INFO nova.virt.libvirt.driver [-] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] Instance destroyed successfully.
Jan 26 17:25:39 compute-0 nova_compute[185389]: 2026-01-26 17:25:39.151 185393 DEBUG nova.objects.instance [None req-f05a9555-1424-46a7-b489-e21c2cd2b9e8 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Lazy-loading 'resources' on Instance uuid cf6218c0-bc2c-4097-91df-f60657ef7ab1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 17:25:39 compute-0 nova_compute[185389]: 2026-01-26 17:25:39.164 185393 DEBUG nova.virt.libvirt.vif [None req-f05a9555-1424-46a7-b489-e21c2cd2b9e8 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T17:23:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-979678882',display_name='tempest-TestNetworkBasicOps-server-979678882',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-979678882',id=11,image_ref='90acf026-cf3a-409a-999e-35d89bb9a6bf',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP+jha5o/aZq5uZdccmmJmVbVXMmdJ9yvermTWC6rreNImtyIBQbEkIIBt+QllF3Pluku08MzARjYDJ2ncgmid88GHIWnOSOFYqddg/+d8y/J6sZxMXgV9oLcscbo2PVKg==',key_name='tempest-TestNetworkBasicOps-1082130080',keypairs=<?>,launch_index=0,launched_at=2026-01-26T17:23:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='72e07b00ccf54deaa85258e2c3332b45',ramdisk_id='',reservation_id='r-kk5vnpdr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='90acf026-cf3a-409a-999e-35d89bb9a6bf',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-420464940',owner_user_name='tempest-TestNetworkBasicOps-420464940-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T17:23:29Z,user_data=None,user_id='a04a28d3bd7648abb04b59df0aeee0aa',uuid=cf6218c0-bc2c-4097-91df-f60657ef7ab1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "994f4b51-014f-469e-9096-4ffe2dafa019", "address": "fa:16:3e:d9:71:2d", "network": {"id": "181e9ee7-4b3f-4c71-9f87-ee525fae0a23", "bridge": "br-int", "label": "tempest-network-smoke--2054182957", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "72e07b00ccf54deaa85258e2c3332b45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap994f4b51-01", "ovs_interfaceid": "994f4b51-014f-469e-9096-4ffe2dafa019", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 17:25:39 compute-0 nova_compute[185389]: 2026-01-26 17:25:39.164 185393 DEBUG nova.network.os_vif_util [None req-f05a9555-1424-46a7-b489-e21c2cd2b9e8 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Converting VIF {"id": "994f4b51-014f-469e-9096-4ffe2dafa019", "address": "fa:16:3e:d9:71:2d", "network": {"id": "181e9ee7-4b3f-4c71-9f87-ee525fae0a23", "bridge": "br-int", "label": "tempest-network-smoke--2054182957", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "72e07b00ccf54deaa85258e2c3332b45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap994f4b51-01", "ovs_interfaceid": "994f4b51-014f-469e-9096-4ffe2dafa019", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 17:25:39 compute-0 nova_compute[185389]: 2026-01-26 17:25:39.165 185393 DEBUG nova.network.os_vif_util [None req-f05a9555-1424-46a7-b489-e21c2cd2b9e8 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d9:71:2d,bridge_name='br-int',has_traffic_filtering=True,id=994f4b51-014f-469e-9096-4ffe2dafa019,network=Network(181e9ee7-4b3f-4c71-9f87-ee525fae0a23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap994f4b51-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 17:25:39 compute-0 nova_compute[185389]: 2026-01-26 17:25:39.166 185393 DEBUG os_vif [None req-f05a9555-1424-46a7-b489-e21c2cd2b9e8 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d9:71:2d,bridge_name='br-int',has_traffic_filtering=True,id=994f4b51-014f-469e-9096-4ffe2dafa019,network=Network(181e9ee7-4b3f-4c71-9f87-ee525fae0a23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap994f4b51-01') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 17:25:39 compute-0 nova_compute[185389]: 2026-01-26 17:25:39.168 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:25:39 compute-0 nova_compute[185389]: 2026-01-26 17:25:39.168 185393 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap994f4b51-01, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:25:39 compute-0 nova_compute[185389]: 2026-01-26 17:25:39.170 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:25:39 compute-0 nova_compute[185389]: 2026-01-26 17:25:39.172 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 17:25:39 compute-0 nova_compute[185389]: 2026-01-26 17:25:39.173 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:25:39 compute-0 nova_compute[185389]: 2026-01-26 17:25:39.176 185393 INFO os_vif [None req-f05a9555-1424-46a7-b489-e21c2cd2b9e8 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d9:71:2d,bridge_name='br-int',has_traffic_filtering=True,id=994f4b51-014f-469e-9096-4ffe2dafa019,network=Network(181e9ee7-4b3f-4c71-9f87-ee525fae0a23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap994f4b51-01')
Jan 26 17:25:39 compute-0 nova_compute[185389]: 2026-01-26 17:25:39.177 185393 INFO nova.virt.libvirt.driver [None req-f05a9555-1424-46a7-b489-e21c2cd2b9e8 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] Deleting instance files /var/lib/nova/instances/cf6218c0-bc2c-4097-91df-f60657ef7ab1_del
Jan 26 17:25:39 compute-0 nova_compute[185389]: 2026-01-26 17:25:39.178 185393 INFO nova.virt.libvirt.driver [None req-f05a9555-1424-46a7-b489-e21c2cd2b9e8 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] Deletion of /var/lib/nova/instances/cf6218c0-bc2c-4097-91df-f60657ef7ab1_del complete
Jan 26 17:25:39 compute-0 nova_compute[185389]: 2026-01-26 17:25:39.227 185393 INFO nova.compute.manager [None req-f05a9555-1424-46a7-b489-e21c2cd2b9e8 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] Took 0.38 seconds to destroy the instance on the hypervisor.
Jan 26 17:25:39 compute-0 nova_compute[185389]: 2026-01-26 17:25:39.228 185393 DEBUG oslo.service.loopingcall [None req-f05a9555-1424-46a7-b489-e21c2cd2b9e8 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 17:25:39 compute-0 nova_compute[185389]: 2026-01-26 17:25:39.229 185393 DEBUG nova.compute.manager [-] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 17:25:39 compute-0 nova_compute[185389]: 2026-01-26 17:25:39.230 185393 DEBUG nova.network.neutron [-] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 17:25:39 compute-0 nova_compute[185389]: 2026-01-26 17:25:39.289 185393 DEBUG nova.network.neutron [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] [instance: e14bdaa0-ac4b-4c4a-8036-640cb431e8b7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 17:25:39 compute-0 neutron-haproxy-ovnmeta-181e9ee7-4b3f-4c71-9f87-ee525fae0a23[257790]: [NOTICE]   (257794) : haproxy version is 2.8.14-c23fe91
Jan 26 17:25:39 compute-0 neutron-haproxy-ovnmeta-181e9ee7-4b3f-4c71-9f87-ee525fae0a23[257790]: [NOTICE]   (257794) : path to executable is /usr/sbin/haproxy
Jan 26 17:25:39 compute-0 neutron-haproxy-ovnmeta-181e9ee7-4b3f-4c71-9f87-ee525fae0a23[257790]: [WARNING]  (257794) : Exiting Master process...
Jan 26 17:25:39 compute-0 neutron-haproxy-ovnmeta-181e9ee7-4b3f-4c71-9f87-ee525fae0a23[257790]: [WARNING]  (257794) : Exiting Master process...
Jan 26 17:25:39 compute-0 neutron-haproxy-ovnmeta-181e9ee7-4b3f-4c71-9f87-ee525fae0a23[257790]: [ALERT]    (257794) : Current worker (257796) exited with code 143 (Terminated)
Jan 26 17:25:39 compute-0 neutron-haproxy-ovnmeta-181e9ee7-4b3f-4c71-9f87-ee525fae0a23[257790]: [WARNING]  (257794) : All workers exited. Exiting... (0)
Jan 26 17:25:39 compute-0 systemd[1]: libpod-b4506e9329afec0d89c7cc6b898c94cbf401314416d572edd3b8d3ebf77d8902.scope: Deactivated successfully.
Jan 26 17:25:39 compute-0 podman[259135]: 2026-01-26 17:25:39.882481186 +0000 UTC m=+0.821485481 container died b4506e9329afec0d89c7cc6b898c94cbf401314416d572edd3b8d3ebf77d8902 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-181e9ee7-4b3f-4c71-9f87-ee525fae0a23, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 26 17:25:40 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b4506e9329afec0d89c7cc6b898c94cbf401314416d572edd3b8d3ebf77d8902-userdata-shm.mount: Deactivated successfully.
Jan 26 17:25:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-e8c9620c5cf6ab314a2c5b3f74964911d8572b82cf7918efbba9e4920e5333fd-merged.mount: Deactivated successfully.
Jan 26 17:25:40 compute-0 podman[259135]: 2026-01-26 17:25:40.318579674 +0000 UTC m=+1.257583969 container cleanup b4506e9329afec0d89c7cc6b898c94cbf401314416d572edd3b8d3ebf77d8902 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-181e9ee7-4b3f-4c71-9f87-ee525fae0a23, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 26 17:25:40 compute-0 systemd[1]: libpod-conmon-b4506e9329afec0d89c7cc6b898c94cbf401314416d572edd3b8d3ebf77d8902.scope: Deactivated successfully.
Jan 26 17:25:40 compute-0 podman[259177]: 2026-01-26 17:25:40.550730166 +0000 UTC m=+0.193856933 container remove b4506e9329afec0d89c7cc6b898c94cbf401314416d572edd3b8d3ebf77d8902 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-181e9ee7-4b3f-4c71-9f87-ee525fae0a23, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3)
Jan 26 17:25:40 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:25:40.561 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[b4f6dcd7-983d-49f7-9bf1-5398e8bad78c]: (4, ('Mon Jan 26 05:25:39 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-181e9ee7-4b3f-4c71-9f87-ee525fae0a23 (b4506e9329afec0d89c7cc6b898c94cbf401314416d572edd3b8d3ebf77d8902)\nb4506e9329afec0d89c7cc6b898c94cbf401314416d572edd3b8d3ebf77d8902\nMon Jan 26 05:25:40 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-181e9ee7-4b3f-4c71-9f87-ee525fae0a23 (b4506e9329afec0d89c7cc6b898c94cbf401314416d572edd3b8d3ebf77d8902)\nb4506e9329afec0d89c7cc6b898c94cbf401314416d572edd3b8d3ebf77d8902\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:25:40 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:25:40.563 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[359d10e3-16b7-4d49-a4c1-d39950996268]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:25:40 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:25:40.564 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap181e9ee7-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:25:40 compute-0 nova_compute[185389]: 2026-01-26 17:25:40.565 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:25:40 compute-0 kernel: tap181e9ee7-40: left promiscuous mode
Jan 26 17:25:40 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:25:40.572 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[255f3a61-c1b5-4d53-86c8-e4e16aba06c2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:25:40 compute-0 nova_compute[185389]: 2026-01-26 17:25:40.591 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:25:40 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:25:40.594 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[b4b44f62-db65-4af7-91d3-03208d8acf48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:25:40 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:25:40.595 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[2cbd303c-ad6a-4e21-b1cc-598aab167df8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:25:40 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:25:40.612 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[f30c7d6c-e6e5-442f-9d94-5156cdf17920]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 685662, 'reachable_time': 39549, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 259190, 'error': None, 'target': 'ovnmeta-181e9ee7-4b3f-4c71-9f87-ee525fae0a23', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:25:40 compute-0 systemd[1]: run-netns-ovnmeta\x2d181e9ee7\x2d4b3f\x2d4c71\x2d9f87\x2dee525fae0a23.mount: Deactivated successfully.
Jan 26 17:25:40 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:25:40.615 107449 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-181e9ee7-4b3f-4c71-9f87-ee525fae0a23 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 17:25:40 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:25:40.615 107449 DEBUG oslo.privsep.daemon [-] privsep: reply[4cbbef75-e1cd-4ebf-af62-4447841e226d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:25:40 compute-0 nova_compute[185389]: 2026-01-26 17:25:40.974 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:25:42 compute-0 nova_compute[185389]: 2026-01-26 17:25:42.426 185393 DEBUG nova.compute.manager [req-bcb801ce-2361-403a-9551-8d2e47ec4441 req-3bafba44-55b8-4396-b5b3-a8c57bfdb4fc 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] Received event network-vif-unplugged-994f4b51-014f-469e-9096-4ffe2dafa019 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 17:25:42 compute-0 nova_compute[185389]: 2026-01-26 17:25:42.429 185393 DEBUG oslo_concurrency.lockutils [req-bcb801ce-2361-403a-9551-8d2e47ec4441 req-3bafba44-55b8-4396-b5b3-a8c57bfdb4fc 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquiring lock "cf6218c0-bc2c-4097-91df-f60657ef7ab1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:25:42 compute-0 nova_compute[185389]: 2026-01-26 17:25:42.429 185393 DEBUG oslo_concurrency.lockutils [req-bcb801ce-2361-403a-9551-8d2e47ec4441 req-3bafba44-55b8-4396-b5b3-a8c57bfdb4fc 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "cf6218c0-bc2c-4097-91df-f60657ef7ab1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:25:42 compute-0 nova_compute[185389]: 2026-01-26 17:25:42.430 185393 DEBUG oslo_concurrency.lockutils [req-bcb801ce-2361-403a-9551-8d2e47ec4441 req-3bafba44-55b8-4396-b5b3-a8c57bfdb4fc 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "cf6218c0-bc2c-4097-91df-f60657ef7ab1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:25:42 compute-0 nova_compute[185389]: 2026-01-26 17:25:42.430 185393 DEBUG nova.compute.manager [req-bcb801ce-2361-403a-9551-8d2e47ec4441 req-3bafba44-55b8-4396-b5b3-a8c57bfdb4fc 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] No waiting events found dispatching network-vif-unplugged-994f4b51-014f-469e-9096-4ffe2dafa019 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 17:25:42 compute-0 nova_compute[185389]: 2026-01-26 17:25:42.431 185393 DEBUG nova.compute.manager [req-bcb801ce-2361-403a-9551-8d2e47ec4441 req-3bafba44-55b8-4396-b5b3-a8c57bfdb4fc 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] Received event network-vif-unplugged-994f4b51-014f-469e-9096-4ffe2dafa019 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 17:25:42 compute-0 nova_compute[185389]: 2026-01-26 17:25:42.915 185393 DEBUG nova.network.neutron [-] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 17:25:42 compute-0 nova_compute[185389]: 2026-01-26 17:25:42.932 185393 INFO nova.compute.manager [-] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] Took 3.70 seconds to deallocate network for instance.
Jan 26 17:25:42 compute-0 nova_compute[185389]: 2026-01-26 17:25:42.983 185393 DEBUG oslo_concurrency.lockutils [None req-f05a9555-1424-46a7-b489-e21c2cd2b9e8 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:25:42 compute-0 nova_compute[185389]: 2026-01-26 17:25:42.984 185393 DEBUG oslo_concurrency.lockutils [None req-f05a9555-1424-46a7-b489-e21c2cd2b9e8 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:25:43 compute-0 nova_compute[185389]: 2026-01-26 17:25:43.094 185393 DEBUG nova.compute.provider_tree [None req-f05a9555-1424-46a7-b489-e21c2cd2b9e8 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:25:43 compute-0 nova_compute[185389]: 2026-01-26 17:25:43.172 185393 DEBUG nova.scheduler.client.report [None req-f05a9555-1424-46a7-b489-e21c2cd2b9e8 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:25:43 compute-0 nova_compute[185389]: 2026-01-26 17:25:43.205 185393 DEBUG oslo_concurrency.lockutils [None req-f05a9555-1424-46a7-b489-e21c2cd2b9e8 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.221s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:25:43 compute-0 nova_compute[185389]: 2026-01-26 17:25:43.253 185393 INFO nova.scheduler.client.report [None req-f05a9555-1424-46a7-b489-e21c2cd2b9e8 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Deleted allocations for instance cf6218c0-bc2c-4097-91df-f60657ef7ab1
Jan 26 17:25:43 compute-0 podman[259193]: 2026-01-26 17:25:43.265268495 +0000 UTC m=+0.086957032 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 17:25:43 compute-0 podman[259192]: 2026-01-26 17:25:43.29752051 +0000 UTC m=+0.124223523 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, org.label-schema.build-date=20260120, org.label-schema.schema-version=1.0, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Jan 26 17:25:43 compute-0 podman[259191]: 2026-01-26 17:25:43.297582491 +0000 UTC m=+0.121782216 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., release=1755695350, version=9.6, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9-minimal, vcs-type=git)
Jan 26 17:25:43 compute-0 nova_compute[185389]: 2026-01-26 17:25:43.301 185393 DEBUG nova.network.neutron [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] [instance: e14bdaa0-ac4b-4c4a-8036-640cb431e8b7] Updating instance_info_cache with network_info: [{"id": "d92e58d6-ae98-4c68-82c7-6b27e1ed65d9", "address": "fa:16:3e:37:f2:15", "network": {"id": "132f3e5b-f2c7-4516-a253-7e99f4460896", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1043248803-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b66458547a0a47a3bec4b3808c40db40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd92e58d6-ae", "ovs_interfaceid": "d92e58d6-ae98-4c68-82c7-6b27e1ed65d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 17:25:43 compute-0 nova_compute[185389]: 2026-01-26 17:25:43.411 185393 DEBUG oslo_concurrency.lockutils [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] Releasing lock "refresh_cache-e14bdaa0-ac4b-4c4a-8036-640cb431e8b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 17:25:43 compute-0 nova_compute[185389]: 2026-01-26 17:25:43.412 185393 DEBUG nova.compute.manager [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] [instance: e14bdaa0-ac4b-4c4a-8036-640cb431e8b7] Instance network_info: |[{"id": "d92e58d6-ae98-4c68-82c7-6b27e1ed65d9", "address": "fa:16:3e:37:f2:15", "network": {"id": "132f3e5b-f2c7-4516-a253-7e99f4460896", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1043248803-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b66458547a0a47a3bec4b3808c40db40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd92e58d6-ae", "ovs_interfaceid": "d92e58d6-ae98-4c68-82c7-6b27e1ed65d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 17:25:43 compute-0 nova_compute[185389]: 2026-01-26 17:25:43.412 185393 DEBUG oslo_concurrency.lockutils [req-e393f003-1b23-48e3-8ff4-7f4b3d54d1e9 req-a400be28-f677-44a2-8587-fdeed69cb0b1 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquired lock "refresh_cache-e14bdaa0-ac4b-4c4a-8036-640cb431e8b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 17:25:43 compute-0 nova_compute[185389]: 2026-01-26 17:25:43.412 185393 DEBUG nova.network.neutron [req-e393f003-1b23-48e3-8ff4-7f4b3d54d1e9 req-a400be28-f677-44a2-8587-fdeed69cb0b1 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: e14bdaa0-ac4b-4c4a-8036-640cb431e8b7] Refreshing network info cache for port d92e58d6-ae98-4c68-82c7-6b27e1ed65d9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 17:25:43 compute-0 nova_compute[185389]: 2026-01-26 17:25:43.415 185393 DEBUG nova.virt.libvirt.driver [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] [instance: e14bdaa0-ac4b-4c4a-8036-640cb431e8b7] Start _get_guest_xml network_info=[{"id": "d92e58d6-ae98-4c68-82c7-6b27e1ed65d9", "address": "fa:16:3e:37:f2:15", "network": {"id": "132f3e5b-f2c7-4516-a253-7e99f4460896", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1043248803-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b66458547a0a47a3bec4b3808c40db40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd92e58d6-ae", "ovs_interfaceid": "d92e58d6-ae98-4c68-82c7-6b27e1ed65d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T17:20:37Z,direct_url=<?>,disk_format='qcow2',id=90acf026-cf3a-409a-999e-35d89bb9a6bf,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='aa8f1f3bbce34237a208c8e92ca9286f',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T17:20:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'boot_index': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '90acf026-cf3a-409a-999e-35d89bb9a6bf'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 17:25:43 compute-0 nova_compute[185389]: 2026-01-26 17:25:43.418 185393 DEBUG oslo_concurrency.lockutils [None req-f05a9555-1424-46a7-b489-e21c2cd2b9e8 a04a28d3bd7648abb04b59df0aeee0aa 72e07b00ccf54deaa85258e2c3332b45 - - default default] Lock "cf6218c0-bc2c-4097-91df-f60657ef7ab1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.575s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:25:43 compute-0 nova_compute[185389]: 2026-01-26 17:25:43.425 185393 WARNING nova.virt.libvirt.driver [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:25:43 compute-0 nova_compute[185389]: 2026-01-26 17:25:43.436 185393 DEBUG nova.virt.libvirt.host [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 17:25:43 compute-0 nova_compute[185389]: 2026-01-26 17:25:43.436 185393 DEBUG nova.virt.libvirt.host [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 17:25:43 compute-0 nova_compute[185389]: 2026-01-26 17:25:43.443 185393 DEBUG nova.virt.libvirt.host [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 17:25:43 compute-0 nova_compute[185389]: 2026-01-26 17:25:43.444 185393 DEBUG nova.virt.libvirt.host [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 17:25:43 compute-0 nova_compute[185389]: 2026-01-26 17:25:43.444 185393 DEBUG nova.virt.libvirt.driver [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 17:25:43 compute-0 nova_compute[185389]: 2026-01-26 17:25:43.445 185393 DEBUG nova.virt.hardware [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T17:20:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8d013773-e8ea-4b83-a8e3-f58d9749637f',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T17:20:37Z,direct_url=<?>,disk_format='qcow2',id=90acf026-cf3a-409a-999e-35d89bb9a6bf,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='aa8f1f3bbce34237a208c8e92ca9286f',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T17:20:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 17:25:43 compute-0 nova_compute[185389]: 2026-01-26 17:25:43.445 185393 DEBUG nova.virt.hardware [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 17:25:43 compute-0 nova_compute[185389]: 2026-01-26 17:25:43.445 185393 DEBUG nova.virt.hardware [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 17:25:43 compute-0 nova_compute[185389]: 2026-01-26 17:25:43.445 185393 DEBUG nova.virt.hardware [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 17:25:43 compute-0 nova_compute[185389]: 2026-01-26 17:25:43.445 185393 DEBUG nova.virt.hardware [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 17:25:43 compute-0 nova_compute[185389]: 2026-01-26 17:25:43.446 185393 DEBUG nova.virt.hardware [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 17:25:43 compute-0 nova_compute[185389]: 2026-01-26 17:25:43.446 185393 DEBUG nova.virt.hardware [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 17:25:43 compute-0 nova_compute[185389]: 2026-01-26 17:25:43.446 185393 DEBUG nova.virt.hardware [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 17:25:43 compute-0 nova_compute[185389]: 2026-01-26 17:25:43.446 185393 DEBUG nova.virt.hardware [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 17:25:43 compute-0 nova_compute[185389]: 2026-01-26 17:25:43.446 185393 DEBUG nova.virt.hardware [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 17:25:43 compute-0 nova_compute[185389]: 2026-01-26 17:25:43.447 185393 DEBUG nova.virt.hardware [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 17:25:43 compute-0 nova_compute[185389]: 2026-01-26 17:25:43.450 185393 DEBUG nova.virt.libvirt.vif [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T17:25:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-917096142',display_name='tempest-TestServerBasicOps-server-917096142',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-917096142',id=15,image_ref='90acf026-cf3a-409a-999e-35d89bb9a6bf',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLnRUK9zBgAvu3yXJU++dE0NEBGG03A4ixvTYGetSMnKPRq8hbgY/s2fyfA6dqOPRtRchNZwyumgdS7UYTDOwPIkPJ9G6RXts/fzMbRYDHnBP8r6DSqNiTwsyWZ9Gb+oXw==',key_name='tempest-TestServerBasicOps-1906012598',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b66458547a0a47a3bec4b3808c40db40',ramdisk_id='',reservation_id='r-3zylsogo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='90acf026-cf3a-409a-999e-35d89bb9a6bf',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-755044077',owner_user_name='tempest-TestServerBasicOps-755044077-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T17:25:35Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f28bfbc50d234cffbe617e420542c11d',uuid=e14bdaa0-ac4b-4c4a-8036-640cb431e8b7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d92e58d6-ae98-4c68-82c7-6b27e1ed65d9", "address": "fa:16:3e:37:f2:15", "network": {"id": "132f3e5b-f2c7-4516-a253-7e99f4460896", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1043248803-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b66458547a0a47a3bec4b3808c40db40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd92e58d6-ae", "ovs_interfaceid": "d92e58d6-ae98-4c68-82c7-6b27e1ed65d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 17:25:43 compute-0 nova_compute[185389]: 2026-01-26 17:25:43.451 185393 DEBUG nova.network.os_vif_util [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] Converting VIF {"id": "d92e58d6-ae98-4c68-82c7-6b27e1ed65d9", "address": "fa:16:3e:37:f2:15", "network": {"id": "132f3e5b-f2c7-4516-a253-7e99f4460896", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1043248803-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b66458547a0a47a3bec4b3808c40db40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd92e58d6-ae", "ovs_interfaceid": "d92e58d6-ae98-4c68-82c7-6b27e1ed65d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 17:25:43 compute-0 nova_compute[185389]: 2026-01-26 17:25:43.451 185393 DEBUG nova.network.os_vif_util [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:37:f2:15,bridge_name='br-int',has_traffic_filtering=True,id=d92e58d6-ae98-4c68-82c7-6b27e1ed65d9,network=Network(132f3e5b-f2c7-4516-a253-7e99f4460896),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd92e58d6-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 17:25:43 compute-0 nova_compute[185389]: 2026-01-26 17:25:43.453 185393 DEBUG nova.objects.instance [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] Lazy-loading 'pci_devices' on Instance uuid e14bdaa0-ac4b-4c4a-8036-640cb431e8b7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 17:25:43 compute-0 nova_compute[185389]: 2026-01-26 17:25:43.470 185393 DEBUG nova.virt.libvirt.driver [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] [instance: e14bdaa0-ac4b-4c4a-8036-640cb431e8b7] End _get_guest_xml xml=<domain type="kvm">
Jan 26 17:25:43 compute-0 nova_compute[185389]:   <uuid>e14bdaa0-ac4b-4c4a-8036-640cb431e8b7</uuid>
Jan 26 17:25:43 compute-0 nova_compute[185389]:   <name>instance-0000000f</name>
Jan 26 17:25:43 compute-0 nova_compute[185389]:   <memory>131072</memory>
Jan 26 17:25:43 compute-0 nova_compute[185389]:   <vcpu>1</vcpu>
Jan 26 17:25:43 compute-0 nova_compute[185389]:   <metadata>
Jan 26 17:25:43 compute-0 nova_compute[185389]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 17:25:43 compute-0 nova_compute[185389]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 17:25:43 compute-0 nova_compute[185389]:       <nova:name>tempest-TestServerBasicOps-server-917096142</nova:name>
Jan 26 17:25:43 compute-0 nova_compute[185389]:       <nova:creationTime>2026-01-26 17:25:43</nova:creationTime>
Jan 26 17:25:43 compute-0 nova_compute[185389]:       <nova:flavor name="m1.nano">
Jan 26 17:25:43 compute-0 nova_compute[185389]:         <nova:memory>128</nova:memory>
Jan 26 17:25:43 compute-0 nova_compute[185389]:         <nova:disk>1</nova:disk>
Jan 26 17:25:43 compute-0 nova_compute[185389]:         <nova:swap>0</nova:swap>
Jan 26 17:25:43 compute-0 nova_compute[185389]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 17:25:43 compute-0 nova_compute[185389]:         <nova:vcpus>1</nova:vcpus>
Jan 26 17:25:43 compute-0 nova_compute[185389]:       </nova:flavor>
Jan 26 17:25:43 compute-0 nova_compute[185389]:       <nova:owner>
Jan 26 17:25:43 compute-0 nova_compute[185389]:         <nova:user uuid="f28bfbc50d234cffbe617e420542c11d">tempest-TestServerBasicOps-755044077-project-member</nova:user>
Jan 26 17:25:43 compute-0 nova_compute[185389]:         <nova:project uuid="b66458547a0a47a3bec4b3808c40db40">tempest-TestServerBasicOps-755044077</nova:project>
Jan 26 17:25:43 compute-0 nova_compute[185389]:       </nova:owner>
Jan 26 17:25:43 compute-0 nova_compute[185389]:       <nova:root type="image" uuid="90acf026-cf3a-409a-999e-35d89bb9a6bf"/>
Jan 26 17:25:43 compute-0 nova_compute[185389]:       <nova:ports>
Jan 26 17:25:43 compute-0 nova_compute[185389]:         <nova:port uuid="d92e58d6-ae98-4c68-82c7-6b27e1ed65d9">
Jan 26 17:25:43 compute-0 nova_compute[185389]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 26 17:25:43 compute-0 nova_compute[185389]:         </nova:port>
Jan 26 17:25:43 compute-0 nova_compute[185389]:       </nova:ports>
Jan 26 17:25:43 compute-0 nova_compute[185389]:     </nova:instance>
Jan 26 17:25:43 compute-0 nova_compute[185389]:   </metadata>
Jan 26 17:25:43 compute-0 nova_compute[185389]:   <sysinfo type="smbios">
Jan 26 17:25:43 compute-0 nova_compute[185389]:     <system>
Jan 26 17:25:43 compute-0 nova_compute[185389]:       <entry name="manufacturer">RDO</entry>
Jan 26 17:25:43 compute-0 nova_compute[185389]:       <entry name="product">OpenStack Compute</entry>
Jan 26 17:25:43 compute-0 nova_compute[185389]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 17:25:43 compute-0 nova_compute[185389]:       <entry name="serial">e14bdaa0-ac4b-4c4a-8036-640cb431e8b7</entry>
Jan 26 17:25:43 compute-0 nova_compute[185389]:       <entry name="uuid">e14bdaa0-ac4b-4c4a-8036-640cb431e8b7</entry>
Jan 26 17:25:43 compute-0 nova_compute[185389]:       <entry name="family">Virtual Machine</entry>
Jan 26 17:25:43 compute-0 nova_compute[185389]:     </system>
Jan 26 17:25:43 compute-0 nova_compute[185389]:   </sysinfo>
Jan 26 17:25:43 compute-0 nova_compute[185389]:   <os>
Jan 26 17:25:43 compute-0 nova_compute[185389]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 17:25:43 compute-0 nova_compute[185389]:     <boot dev="hd"/>
Jan 26 17:25:43 compute-0 nova_compute[185389]:     <smbios mode="sysinfo"/>
Jan 26 17:25:43 compute-0 nova_compute[185389]:   </os>
Jan 26 17:25:43 compute-0 nova_compute[185389]:   <features>
Jan 26 17:25:43 compute-0 nova_compute[185389]:     <acpi/>
Jan 26 17:25:43 compute-0 nova_compute[185389]:     <apic/>
Jan 26 17:25:43 compute-0 nova_compute[185389]:     <vmcoreinfo/>
Jan 26 17:25:43 compute-0 nova_compute[185389]:   </features>
Jan 26 17:25:43 compute-0 nova_compute[185389]:   <clock offset="utc">
Jan 26 17:25:43 compute-0 nova_compute[185389]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 17:25:43 compute-0 nova_compute[185389]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 17:25:43 compute-0 nova_compute[185389]:     <timer name="hpet" present="no"/>
Jan 26 17:25:43 compute-0 nova_compute[185389]:   </clock>
Jan 26 17:25:43 compute-0 nova_compute[185389]:   <cpu mode="host-model" match="exact">
Jan 26 17:25:43 compute-0 nova_compute[185389]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 17:25:43 compute-0 nova_compute[185389]:   </cpu>
Jan 26 17:25:43 compute-0 nova_compute[185389]:   <devices>
Jan 26 17:25:43 compute-0 nova_compute[185389]:     <disk type="file" device="disk">
Jan 26 17:25:43 compute-0 nova_compute[185389]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 26 17:25:43 compute-0 nova_compute[185389]:       <source file="/var/lib/nova/instances/e14bdaa0-ac4b-4c4a-8036-640cb431e8b7/disk"/>
Jan 26 17:25:43 compute-0 nova_compute[185389]:       <target dev="vda" bus="virtio"/>
Jan 26 17:25:43 compute-0 nova_compute[185389]:     </disk>
Jan 26 17:25:43 compute-0 nova_compute[185389]:     <disk type="file" device="cdrom">
Jan 26 17:25:43 compute-0 nova_compute[185389]:       <driver name="qemu" type="raw" cache="none"/>
Jan 26 17:25:43 compute-0 nova_compute[185389]:       <source file="/var/lib/nova/instances/e14bdaa0-ac4b-4c4a-8036-640cb431e8b7/disk.config"/>
Jan 26 17:25:43 compute-0 nova_compute[185389]:       <target dev="sda" bus="sata"/>
Jan 26 17:25:43 compute-0 nova_compute[185389]:     </disk>
Jan 26 17:25:43 compute-0 nova_compute[185389]:     <interface type="ethernet">
Jan 26 17:25:43 compute-0 nova_compute[185389]:       <mac address="fa:16:3e:37:f2:15"/>
Jan 26 17:25:43 compute-0 nova_compute[185389]:       <model type="virtio"/>
Jan 26 17:25:43 compute-0 nova_compute[185389]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 17:25:43 compute-0 nova_compute[185389]:       <mtu size="1442"/>
Jan 26 17:25:43 compute-0 nova_compute[185389]:       <target dev="tapd92e58d6-ae"/>
Jan 26 17:25:43 compute-0 nova_compute[185389]:     </interface>
Jan 26 17:25:43 compute-0 nova_compute[185389]:     <serial type="pty">
Jan 26 17:25:43 compute-0 nova_compute[185389]:       <log file="/var/lib/nova/instances/e14bdaa0-ac4b-4c4a-8036-640cb431e8b7/console.log" append="off"/>
Jan 26 17:25:43 compute-0 nova_compute[185389]:     </serial>
Jan 26 17:25:43 compute-0 nova_compute[185389]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 17:25:43 compute-0 nova_compute[185389]:     <video>
Jan 26 17:25:43 compute-0 nova_compute[185389]:       <model type="virtio"/>
Jan 26 17:25:43 compute-0 nova_compute[185389]:     </video>
Jan 26 17:25:43 compute-0 nova_compute[185389]:     <input type="tablet" bus="usb"/>
Jan 26 17:25:43 compute-0 nova_compute[185389]:     <rng model="virtio">
Jan 26 17:25:43 compute-0 nova_compute[185389]:       <backend model="random">/dev/urandom</backend>
Jan 26 17:25:43 compute-0 nova_compute[185389]:     </rng>
Jan 26 17:25:43 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root"/>
Jan 26 17:25:43 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:25:43 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:25:43 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:25:43 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:25:43 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:25:43 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:25:43 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:25:43 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:25:43 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:25:43 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:25:43 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:25:43 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:25:43 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:25:43 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:25:43 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:25:43 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:25:43 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:25:43 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:25:43 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:25:43 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:25:43 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:25:43 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:25:43 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:25:43 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:25:43 compute-0 nova_compute[185389]:     <controller type="usb" index="0"/>
Jan 26 17:25:43 compute-0 nova_compute[185389]:     <memballoon model="virtio">
Jan 26 17:25:43 compute-0 nova_compute[185389]:       <stats period="10"/>
Jan 26 17:25:43 compute-0 nova_compute[185389]:     </memballoon>
Jan 26 17:25:43 compute-0 nova_compute[185389]:   </devices>
Jan 26 17:25:43 compute-0 nova_compute[185389]: </domain>
Jan 26 17:25:43 compute-0 nova_compute[185389]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 17:25:43 compute-0 nova_compute[185389]: 2026-01-26 17:25:43.470 185393 DEBUG nova.compute.manager [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] [instance: e14bdaa0-ac4b-4c4a-8036-640cb431e8b7] Preparing to wait for external event network-vif-plugged-d92e58d6-ae98-4c68-82c7-6b27e1ed65d9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 17:25:43 compute-0 nova_compute[185389]: 2026-01-26 17:25:43.471 185393 DEBUG oslo_concurrency.lockutils [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] Acquiring lock "e14bdaa0-ac4b-4c4a-8036-640cb431e8b7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:25:43 compute-0 nova_compute[185389]: 2026-01-26 17:25:43.471 185393 DEBUG oslo_concurrency.lockutils [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] Lock "e14bdaa0-ac4b-4c4a-8036-640cb431e8b7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:25:43 compute-0 nova_compute[185389]: 2026-01-26 17:25:43.472 185393 DEBUG oslo_concurrency.lockutils [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] Lock "e14bdaa0-ac4b-4c4a-8036-640cb431e8b7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:25:43 compute-0 nova_compute[185389]: 2026-01-26 17:25:43.472 185393 DEBUG nova.virt.libvirt.vif [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T17:25:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-917096142',display_name='tempest-TestServerBasicOps-server-917096142',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-917096142',id=15,image_ref='90acf026-cf3a-409a-999e-35d89bb9a6bf',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLnRUK9zBgAvu3yXJU++dE0NEBGG03A4ixvTYGetSMnKPRq8hbgY/s2fyfA6dqOPRtRchNZwyumgdS7UYTDOwPIkPJ9G6RXts/fzMbRYDHnBP8r6DSqNiTwsyWZ9Gb+oXw==',key_name='tempest-TestServerBasicOps-1906012598',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b66458547a0a47a3bec4b3808c40db40',ramdisk_id='',reservation_id='r-3zylsogo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='90acf026-cf3a-409a-999e-35d89bb9a6bf',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-755044077',owner_user_name='tempest-TestServerBasicOps-755044077-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T17:25:35Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f28bfbc50d234cffbe617e420542c11d',uuid=e14bdaa0-ac4b-4c4a-8036-640cb431e8b7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d92e58d6-ae98-4c68-82c7-6b27e1ed65d9", "address": "fa:16:3e:37:f2:15", "network": {"id": "132f3e5b-f2c7-4516-a253-7e99f4460896", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1043248803-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": 
{}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b66458547a0a47a3bec4b3808c40db40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd92e58d6-ae", "ovs_interfaceid": "d92e58d6-ae98-4c68-82c7-6b27e1ed65d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 17:25:43 compute-0 nova_compute[185389]: 2026-01-26 17:25:43.473 185393 DEBUG nova.network.os_vif_util [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] Converting VIF {"id": "d92e58d6-ae98-4c68-82c7-6b27e1ed65d9", "address": "fa:16:3e:37:f2:15", "network": {"id": "132f3e5b-f2c7-4516-a253-7e99f4460896", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1043248803-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b66458547a0a47a3bec4b3808c40db40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd92e58d6-ae", "ovs_interfaceid": "d92e58d6-ae98-4c68-82c7-6b27e1ed65d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 17:25:43 compute-0 nova_compute[185389]: 2026-01-26 17:25:43.473 185393 DEBUG nova.network.os_vif_util [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:37:f2:15,bridge_name='br-int',has_traffic_filtering=True,id=d92e58d6-ae98-4c68-82c7-6b27e1ed65d9,network=Network(132f3e5b-f2c7-4516-a253-7e99f4460896),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd92e58d6-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 17:25:43 compute-0 nova_compute[185389]: 2026-01-26 17:25:43.474 185393 DEBUG os_vif [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:f2:15,bridge_name='br-int',has_traffic_filtering=True,id=d92e58d6-ae98-4c68-82c7-6b27e1ed65d9,network=Network(132f3e5b-f2c7-4516-a253-7e99f4460896),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd92e58d6-ae') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 17:25:43 compute-0 nova_compute[185389]: 2026-01-26 17:25:43.474 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:25:43 compute-0 nova_compute[185389]: 2026-01-26 17:25:43.475 185393 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:25:43 compute-0 nova_compute[185389]: 2026-01-26 17:25:43.475 185393 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 17:25:43 compute-0 nova_compute[185389]: 2026-01-26 17:25:43.478 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:25:43 compute-0 nova_compute[185389]: 2026-01-26 17:25:43.479 185393 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd92e58d6-ae, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:25:43 compute-0 nova_compute[185389]: 2026-01-26 17:25:43.479 185393 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd92e58d6-ae, col_values=(('external_ids', {'iface-id': 'd92e58d6-ae98-4c68-82c7-6b27e1ed65d9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:37:f2:15', 'vm-uuid': 'e14bdaa0-ac4b-4c4a-8036-640cb431e8b7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:25:43 compute-0 nova_compute[185389]: 2026-01-26 17:25:43.483 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 17:25:43 compute-0 NetworkManager[56253]: <info>  [1769448343.4847] manager: (tapd92e58d6-ae): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/72)
Jan 26 17:25:43 compute-0 nova_compute[185389]: 2026-01-26 17:25:43.489 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:25:43 compute-0 nova_compute[185389]: 2026-01-26 17:25:43.490 185393 INFO os_vif [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:f2:15,bridge_name='br-int',has_traffic_filtering=True,id=d92e58d6-ae98-4c68-82c7-6b27e1ed65d9,network=Network(132f3e5b-f2c7-4516-a253-7e99f4460896),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd92e58d6-ae')
Jan 26 17:25:43 compute-0 nova_compute[185389]: 2026-01-26 17:25:43.701 185393 DEBUG nova.virt.libvirt.driver [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 17:25:43 compute-0 nova_compute[185389]: 2026-01-26 17:25:43.701 185393 DEBUG nova.virt.libvirt.driver [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 17:25:43 compute-0 nova_compute[185389]: 2026-01-26 17:25:43.701 185393 DEBUG nova.virt.libvirt.driver [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] No VIF found with MAC fa:16:3e:37:f2:15, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 17:25:43 compute-0 nova_compute[185389]: 2026-01-26 17:25:43.714 185393 INFO nova.virt.libvirt.driver [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] [instance: e14bdaa0-ac4b-4c4a-8036-640cb431e8b7] Using config drive
Jan 26 17:25:44 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:25:44.168 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1c72c11d-5050-47c3-89e8-912766588fb3, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:25:44 compute-0 nova_compute[185389]: 2026-01-26 17:25:44.218 185393 INFO nova.virt.libvirt.driver [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] [instance: e14bdaa0-ac4b-4c4a-8036-640cb431e8b7] Creating config drive at /var/lib/nova/instances/e14bdaa0-ac4b-4c4a-8036-640cb431e8b7/disk.config
Jan 26 17:25:44 compute-0 nova_compute[185389]: 2026-01-26 17:25:44.226 185393 DEBUG oslo_concurrency.processutils [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e14bdaa0-ac4b-4c4a-8036-640cb431e8b7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcgczshwf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:25:44 compute-0 nova_compute[185389]: 2026-01-26 17:25:44.356 185393 DEBUG oslo_concurrency.processutils [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e14bdaa0-ac4b-4c4a-8036-640cb431e8b7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcgczshwf" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:25:44 compute-0 kernel: tapd92e58d6-ae: entered promiscuous mode
Jan 26 17:25:44 compute-0 nova_compute[185389]: 2026-01-26 17:25:44.448 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:25:44 compute-0 ovn_controller[97699]: 2026-01-26T17:25:44Z|00155|binding|INFO|Claiming lport d92e58d6-ae98-4c68-82c7-6b27e1ed65d9 for this chassis.
Jan 26 17:25:44 compute-0 ovn_controller[97699]: 2026-01-26T17:25:44Z|00156|binding|INFO|d92e58d6-ae98-4c68-82c7-6b27e1ed65d9: Claiming fa:16:3e:37:f2:15 10.100.0.3
Jan 26 17:25:44 compute-0 NetworkManager[56253]: <info>  [1769448344.4524] manager: (tapd92e58d6-ae): new Tun device (/org/freedesktop/NetworkManager/Devices/73)
Jan 26 17:25:44 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:25:44.467 106955 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:37:f2:15 10.100.0.3'], port_security=['fa:16:3e:37:f2:15 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'e14bdaa0-ac4b-4c4a-8036-640cb431e8b7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-132f3e5b-f2c7-4516-a253-7e99f4460896', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b66458547a0a47a3bec4b3808c40db40', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8a4d9b71-d82e-4d55-bedb-b6fa13fe31be ed11cbd8-8e48-40fa-a512-f7d754992027', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e040d7fa-2bc0-4c36-a18d-e3df5ed3586c, chassis=[<ovs.db.idl.Row object at 0x7faee18cfdf0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7faee18cfdf0>], logical_port=d92e58d6-ae98-4c68-82c7-6b27e1ed65d9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 17:25:44 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:25:44.469 106955 INFO neutron.agent.ovn.metadata.agent [-] Port d92e58d6-ae98-4c68-82c7-6b27e1ed65d9 in datapath 132f3e5b-f2c7-4516-a253-7e99f4460896 bound to our chassis
Jan 26 17:25:44 compute-0 nova_compute[185389]: 2026-01-26 17:25:44.469 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:25:44 compute-0 ovn_controller[97699]: 2026-01-26T17:25:44Z|00157|binding|INFO|Setting lport d92e58d6-ae98-4c68-82c7-6b27e1ed65d9 ovn-installed in OVS
Jan 26 17:25:44 compute-0 ovn_controller[97699]: 2026-01-26T17:25:44Z|00158|binding|INFO|Setting lport d92e58d6-ae98-4c68-82c7-6b27e1ed65d9 up in Southbound
Jan 26 17:25:44 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:25:44.472 106955 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 132f3e5b-f2c7-4516-a253-7e99f4460896
Jan 26 17:25:44 compute-0 nova_compute[185389]: 2026-01-26 17:25:44.471 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:25:44 compute-0 nova_compute[185389]: 2026-01-26 17:25:44.479 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:25:44 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:25:44.495 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[2ac7c836-b19e-49b7-973d-7b76ba32f933]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:25:44 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:25:44.497 106955 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap132f3e5b-f1 in ovnmeta-132f3e5b-f2c7-4516-a253-7e99f4460896 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 17:25:44 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:25:44.500 238734 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap132f3e5b-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 17:25:44 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:25:44.500 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[fe91d1e9-4d24-4466-9c7f-471fb498a5f2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:25:44 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:25:44.502 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[b5f7f7e5-6fea-442a-ab3b-d6f369b6802b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:25:44 compute-0 systemd-udevd[259278]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 17:25:44 compute-0 NetworkManager[56253]: <info>  [1769448344.5166] device (tapd92e58d6-ae): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 17:25:44 compute-0 NetworkManager[56253]: <info>  [1769448344.5215] device (tapd92e58d6-ae): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 17:25:44 compute-0 systemd-machined[156679]: New machine qemu-16-instance-0000000f.
Jan 26 17:25:44 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:25:44.529 107449 DEBUG oslo.privsep.daemon [-] privsep: reply[a1fcbaa4-63c1-46ce-ac92-6ee8cf1bc551]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:25:44 compute-0 systemd[1]: Started Virtual Machine qemu-16-instance-0000000f.
Jan 26 17:25:44 compute-0 podman[259264]: 2026-01-26 17:25:44.55666575 +0000 UTC m=+0.116868213 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 26 17:25:44 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:25:44.557 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[702f01ab-9ce4-45b6-9351-f052f21fa842]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:25:44 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:25:44.593 238787 DEBUG oslo.privsep.daemon [-] privsep: reply[38f0077e-ffb0-4e23-8ccd-19d825fef2c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:25:44 compute-0 NetworkManager[56253]: <info>  [1769448344.6031] manager: (tap132f3e5b-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/74)
Jan 26 17:25:44 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:25:44.601 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[b4e6d852-6073-4363-a6b5-b31d7953b123]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:25:44 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:25:44.659 238787 DEBUG oslo.privsep.daemon [-] privsep: reply[82fa4fc5-4dae-4478-ace4-1abc90f4953d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:25:44 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:25:44.667 238787 DEBUG oslo.privsep.daemon [-] privsep: reply[e1deac6d-70eb-4f83-8d34-e3fc83abdd13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:25:44 compute-0 NetworkManager[56253]: <info>  [1769448344.7042] device (tap132f3e5b-f0): carrier: link connected
Jan 26 17:25:44 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:25:44.726 238787 DEBUG oslo.privsep.daemon [-] privsep: reply[8987b197-0685-4726-9201-1b34e70e925d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:25:44 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:25:44.745 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[a76abcdc-99fc-4611-a418-67231f9edf5f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap132f3e5b-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:fb:eb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 48], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 699320, 'reachable_time': 34201, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 259325, 'error': None, 'target': 'ovnmeta-132f3e5b-f2c7-4516-a253-7e99f4460896', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:25:44 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:25:44.763 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[a966b59e-261d-47ad-82ee-8844fd4f3499]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0b:fbeb'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 699320, 'tstamp': 699320}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 259326, 'error': None, 'target': 'ovnmeta-132f3e5b-f2c7-4516-a253-7e99f4460896', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:25:44 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:25:44.780 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[fb54b5b9-b59c-4dad-8c50-516bf60de4cc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap132f3e5b-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:fb:eb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 48], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 699320, 'reachable_time': 34201, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 259327, 'error': None, 'target': 'ovnmeta-132f3e5b-f2c7-4516-a253-7e99f4460896', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:25:44 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:25:44.815 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[f13f498d-04cd-44e5-86c8-ff17fc398c88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:25:44 compute-0 nova_compute[185389]: 2026-01-26 17:25:44.842 185393 DEBUG nova.compute.manager [req-37b082c8-4eef-424a-86b7-60cf94634ee2 req-48cc7a7e-9776-461e-b074-340202209367 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] Received event network-vif-plugged-994f4b51-014f-469e-9096-4ffe2dafa019 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 17:25:44 compute-0 nova_compute[185389]: 2026-01-26 17:25:44.843 185393 DEBUG oslo_concurrency.lockutils [req-37b082c8-4eef-424a-86b7-60cf94634ee2 req-48cc7a7e-9776-461e-b074-340202209367 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquiring lock "cf6218c0-bc2c-4097-91df-f60657ef7ab1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:25:44 compute-0 nova_compute[185389]: 2026-01-26 17:25:44.843 185393 DEBUG oslo_concurrency.lockutils [req-37b082c8-4eef-424a-86b7-60cf94634ee2 req-48cc7a7e-9776-461e-b074-340202209367 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "cf6218c0-bc2c-4097-91df-f60657ef7ab1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:25:44 compute-0 nova_compute[185389]: 2026-01-26 17:25:44.844 185393 DEBUG oslo_concurrency.lockutils [req-37b082c8-4eef-424a-86b7-60cf94634ee2 req-48cc7a7e-9776-461e-b074-340202209367 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "cf6218c0-bc2c-4097-91df-f60657ef7ab1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:25:44 compute-0 nova_compute[185389]: 2026-01-26 17:25:44.844 185393 DEBUG nova.compute.manager [req-37b082c8-4eef-424a-86b7-60cf94634ee2 req-48cc7a7e-9776-461e-b074-340202209367 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] No waiting events found dispatching network-vif-plugged-994f4b51-014f-469e-9096-4ffe2dafa019 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 17:25:44 compute-0 nova_compute[185389]: 2026-01-26 17:25:44.845 185393 WARNING nova.compute.manager [req-37b082c8-4eef-424a-86b7-60cf94634ee2 req-48cc7a7e-9776-461e-b074-340202209367 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] Received unexpected event network-vif-plugged-994f4b51-014f-469e-9096-4ffe2dafa019 for instance with vm_state deleted and task_state None.
Jan 26 17:25:44 compute-0 nova_compute[185389]: 2026-01-26 17:25:44.845 185393 DEBUG nova.compute.manager [req-37b082c8-4eef-424a-86b7-60cf94634ee2 req-48cc7a7e-9776-461e-b074-340202209367 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] Received event network-vif-deleted-994f4b51-014f-469e-9096-4ffe2dafa019 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 17:25:44 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:25:44.895 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[cb271ba5-dbd3-4b6a-a871-c9b21a45a851]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:25:44 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:25:44.897 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap132f3e5b-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:25:44 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:25:44.897 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 17:25:44 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:25:44.898 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap132f3e5b-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:25:44 compute-0 kernel: tap132f3e5b-f0: entered promiscuous mode
Jan 26 17:25:44 compute-0 nova_compute[185389]: 2026-01-26 17:25:44.900 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:25:44 compute-0 NetworkManager[56253]: <info>  [1769448344.9013] manager: (tap132f3e5b-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/75)
Jan 26 17:25:44 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:25:44.905 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap132f3e5b-f0, col_values=(('external_ids', {'iface-id': 'd9ab23aa-d039-4f5e-9e36-3a5b71cdfc53'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:25:44 compute-0 nova_compute[185389]: 2026-01-26 17:25:44.906 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:25:44 compute-0 ovn_controller[97699]: 2026-01-26T17:25:44Z|00159|binding|INFO|Releasing lport d9ab23aa-d039-4f5e-9e36-3a5b71cdfc53 from this chassis (sb_readonly=0)
Jan 26 17:25:44 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:25:44.908 106955 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/132f3e5b-f2c7-4516-a253-7e99f4460896.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/132f3e5b-f2c7-4516-a253-7e99f4460896.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 17:25:44 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:25:44.909 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[d9b54f1e-0a3a-4a45-928b-50654b97f6b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:25:44 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:25:44.910 106955 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 17:25:44 compute-0 ovn_metadata_agent[106950]: global
Jan 26 17:25:44 compute-0 ovn_metadata_agent[106950]:     log         /dev/log local0 debug
Jan 26 17:25:44 compute-0 ovn_metadata_agent[106950]:     log-tag     haproxy-metadata-proxy-132f3e5b-f2c7-4516-a253-7e99f4460896
Jan 26 17:25:44 compute-0 ovn_metadata_agent[106950]:     user        root
Jan 26 17:25:44 compute-0 ovn_metadata_agent[106950]:     group       root
Jan 26 17:25:44 compute-0 ovn_metadata_agent[106950]:     maxconn     1024
Jan 26 17:25:44 compute-0 ovn_metadata_agent[106950]:     pidfile     /var/lib/neutron/external/pids/132f3e5b-f2c7-4516-a253-7e99f4460896.pid.haproxy
Jan 26 17:25:44 compute-0 ovn_metadata_agent[106950]:     daemon
Jan 26 17:25:44 compute-0 ovn_metadata_agent[106950]: 
Jan 26 17:25:44 compute-0 ovn_metadata_agent[106950]: defaults
Jan 26 17:25:44 compute-0 ovn_metadata_agent[106950]:     log global
Jan 26 17:25:44 compute-0 ovn_metadata_agent[106950]:     mode http
Jan 26 17:25:44 compute-0 ovn_metadata_agent[106950]:     option httplog
Jan 26 17:25:44 compute-0 ovn_metadata_agent[106950]:     option dontlognull
Jan 26 17:25:44 compute-0 ovn_metadata_agent[106950]:     option http-server-close
Jan 26 17:25:44 compute-0 ovn_metadata_agent[106950]:     option forwardfor
Jan 26 17:25:44 compute-0 ovn_metadata_agent[106950]:     retries                 3
Jan 26 17:25:44 compute-0 ovn_metadata_agent[106950]:     timeout http-request    30s
Jan 26 17:25:44 compute-0 ovn_metadata_agent[106950]:     timeout connect         30s
Jan 26 17:25:44 compute-0 ovn_metadata_agent[106950]:     timeout client          32s
Jan 26 17:25:44 compute-0 ovn_metadata_agent[106950]:     timeout server          32s
Jan 26 17:25:44 compute-0 ovn_metadata_agent[106950]:     timeout http-keep-alive 30s
Jan 26 17:25:44 compute-0 ovn_metadata_agent[106950]: 
Jan 26 17:25:44 compute-0 ovn_metadata_agent[106950]: 
Jan 26 17:25:44 compute-0 ovn_metadata_agent[106950]: listen listener
Jan 26 17:25:44 compute-0 ovn_metadata_agent[106950]:     bind 169.254.169.254:80
Jan 26 17:25:44 compute-0 ovn_metadata_agent[106950]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 17:25:44 compute-0 ovn_metadata_agent[106950]:     http-request add-header X-OVN-Network-ID 132f3e5b-f2c7-4516-a253-7e99f4460896
Jan 26 17:25:44 compute-0 ovn_metadata_agent[106950]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 17:25:44 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:25:44.911 106955 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-132f3e5b-f2c7-4516-a253-7e99f4460896', 'env', 'PROCESS_TAG=haproxy-132f3e5b-f2c7-4516-a253-7e99f4460896', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/132f3e5b-f2c7-4516-a253-7e99f4460896.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 17:25:44 compute-0 nova_compute[185389]: 2026-01-26 17:25:44.922 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:25:45 compute-0 nova_compute[185389]: 2026-01-26 17:25:45.070 185393 DEBUG nova.virt.driver [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Emitting event <LifecycleEvent: 1769448345.069727, e14bdaa0-ac4b-4c4a-8036-640cb431e8b7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 17:25:45 compute-0 nova_compute[185389]: 2026-01-26 17:25:45.070 185393 INFO nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: e14bdaa0-ac4b-4c4a-8036-640cb431e8b7] VM Started (Lifecycle Event)
Jan 26 17:25:45 compute-0 nova_compute[185389]: 2026-01-26 17:25:45.099 185393 DEBUG nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: e14bdaa0-ac4b-4c4a-8036-640cb431e8b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 17:25:45 compute-0 nova_compute[185389]: 2026-01-26 17:25:45.105 185393 DEBUG nova.virt.driver [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Emitting event <LifecycleEvent: 1769448345.0698369, e14bdaa0-ac4b-4c4a-8036-640cb431e8b7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 17:25:45 compute-0 nova_compute[185389]: 2026-01-26 17:25:45.105 185393 INFO nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: e14bdaa0-ac4b-4c4a-8036-640cb431e8b7] VM Paused (Lifecycle Event)
Jan 26 17:25:45 compute-0 nova_compute[185389]: 2026-01-26 17:25:45.128 185393 DEBUG nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: e14bdaa0-ac4b-4c4a-8036-640cb431e8b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 17:25:45 compute-0 nova_compute[185389]: 2026-01-26 17:25:45.134 185393 DEBUG nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: e14bdaa0-ac4b-4c4a-8036-640cb431e8b7] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 17:25:45 compute-0 nova_compute[185389]: 2026-01-26 17:25:45.162 185393 INFO nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: e14bdaa0-ac4b-4c4a-8036-640cb431e8b7] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 17:25:45 compute-0 podman[259365]: 2026-01-26 17:25:45.352543355 +0000 UTC m=+0.066392564 container create 7e0b1f1218d7f5cdebb4e45575ec8c2ef7bb994aa534ab1cf5a032336520871c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-132f3e5b-f2c7-4516-a253-7e99f4460896, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS)
Jan 26 17:25:45 compute-0 systemd[1]: Started libpod-conmon-7e0b1f1218d7f5cdebb4e45575ec8c2ef7bb994aa534ab1cf5a032336520871c.scope.
Jan 26 17:25:45 compute-0 podman[259365]: 2026-01-26 17:25:45.321830831 +0000 UTC m=+0.035680060 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 17:25:45 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:25:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10191dcc3693201a1ee02b28cd2874dd6a47e017bbe7cd385c44010ed1d0f3ee/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 17:25:45 compute-0 podman[259365]: 2026-01-26 17:25:45.454523153 +0000 UTC m=+0.168372382 container init 7e0b1f1218d7f5cdebb4e45575ec8c2ef7bb994aa534ab1cf5a032336520871c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-132f3e5b-f2c7-4516-a253-7e99f4460896, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 26 17:25:45 compute-0 podman[259365]: 2026-01-26 17:25:45.464427462 +0000 UTC m=+0.178276681 container start 7e0b1f1218d7f5cdebb4e45575ec8c2ef7bb994aa534ab1cf5a032336520871c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-132f3e5b-f2c7-4516-a253-7e99f4460896, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 17:25:45 compute-0 podman[259377]: 2026-01-26 17:25:45.487765365 +0000 UTC m=+0.093527909 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Jan 26 17:25:45 compute-0 neutron-haproxy-ovnmeta-132f3e5b-f2c7-4516-a253-7e99f4460896[259385]: [NOTICE]   (259399) : New worker (259403) forked
Jan 26 17:25:45 compute-0 neutron-haproxy-ovnmeta-132f3e5b-f2c7-4516-a253-7e99f4460896[259385]: [NOTICE]   (259399) : Loading success.
Jan 26 17:25:45 compute-0 nova_compute[185389]: 2026-01-26 17:25:45.698 185393 DEBUG nova.network.neutron [req-e393f003-1b23-48e3-8ff4-7f4b3d54d1e9 req-a400be28-f677-44a2-8587-fdeed69cb0b1 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: e14bdaa0-ac4b-4c4a-8036-640cb431e8b7] Updated VIF entry in instance network info cache for port d92e58d6-ae98-4c68-82c7-6b27e1ed65d9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 17:25:45 compute-0 nova_compute[185389]: 2026-01-26 17:25:45.698 185393 DEBUG nova.network.neutron [req-e393f003-1b23-48e3-8ff4-7f4b3d54d1e9 req-a400be28-f677-44a2-8587-fdeed69cb0b1 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: e14bdaa0-ac4b-4c4a-8036-640cb431e8b7] Updating instance_info_cache with network_info: [{"id": "d92e58d6-ae98-4c68-82c7-6b27e1ed65d9", "address": "fa:16:3e:37:f2:15", "network": {"id": "132f3e5b-f2c7-4516-a253-7e99f4460896", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1043248803-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b66458547a0a47a3bec4b3808c40db40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd92e58d6-ae", "ovs_interfaceid": "d92e58d6-ae98-4c68-82c7-6b27e1ed65d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 17:25:45 compute-0 nova_compute[185389]: 2026-01-26 17:25:45.751 185393 DEBUG oslo_concurrency.lockutils [req-e393f003-1b23-48e3-8ff4-7f4b3d54d1e9 req-a400be28-f677-44a2-8587-fdeed69cb0b1 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Releasing lock "refresh_cache-e14bdaa0-ac4b-4c4a-8036-640cb431e8b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 17:25:45 compute-0 nova_compute[185389]: 2026-01-26 17:25:45.976 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:25:46 compute-0 nova_compute[185389]: 2026-01-26 17:25:46.952 185393 DEBUG nova.compute.manager [req-732bdedb-b0b4-4e7f-ae02-5e37c0f235b0 req-1fc6ef1c-80c2-41d5-b3cd-aeca0e28751c 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: e14bdaa0-ac4b-4c4a-8036-640cb431e8b7] Received event network-vif-plugged-d92e58d6-ae98-4c68-82c7-6b27e1ed65d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 17:25:46 compute-0 nova_compute[185389]: 2026-01-26 17:25:46.954 185393 DEBUG oslo_concurrency.lockutils [req-732bdedb-b0b4-4e7f-ae02-5e37c0f235b0 req-1fc6ef1c-80c2-41d5-b3cd-aeca0e28751c 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquiring lock "e14bdaa0-ac4b-4c4a-8036-640cb431e8b7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:25:46 compute-0 nova_compute[185389]: 2026-01-26 17:25:46.954 185393 DEBUG oslo_concurrency.lockutils [req-732bdedb-b0b4-4e7f-ae02-5e37c0f235b0 req-1fc6ef1c-80c2-41d5-b3cd-aeca0e28751c 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "e14bdaa0-ac4b-4c4a-8036-640cb431e8b7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:25:46 compute-0 nova_compute[185389]: 2026-01-26 17:25:46.955 185393 DEBUG oslo_concurrency.lockutils [req-732bdedb-b0b4-4e7f-ae02-5e37c0f235b0 req-1fc6ef1c-80c2-41d5-b3cd-aeca0e28751c 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "e14bdaa0-ac4b-4c4a-8036-640cb431e8b7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:25:46 compute-0 nova_compute[185389]: 2026-01-26 17:25:46.955 185393 DEBUG nova.compute.manager [req-732bdedb-b0b4-4e7f-ae02-5e37c0f235b0 req-1fc6ef1c-80c2-41d5-b3cd-aeca0e28751c 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: e14bdaa0-ac4b-4c4a-8036-640cb431e8b7] Processing event network-vif-plugged-d92e58d6-ae98-4c68-82c7-6b27e1ed65d9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 17:25:46 compute-0 nova_compute[185389]: 2026-01-26 17:25:46.956 185393 DEBUG nova.compute.manager [req-732bdedb-b0b4-4e7f-ae02-5e37c0f235b0 req-1fc6ef1c-80c2-41d5-b3cd-aeca0e28751c 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: e14bdaa0-ac4b-4c4a-8036-640cb431e8b7] Received event network-vif-plugged-d92e58d6-ae98-4c68-82c7-6b27e1ed65d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 17:25:46 compute-0 nova_compute[185389]: 2026-01-26 17:25:46.956 185393 DEBUG oslo_concurrency.lockutils [req-732bdedb-b0b4-4e7f-ae02-5e37c0f235b0 req-1fc6ef1c-80c2-41d5-b3cd-aeca0e28751c 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquiring lock "e14bdaa0-ac4b-4c4a-8036-640cb431e8b7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:25:46 compute-0 nova_compute[185389]: 2026-01-26 17:25:46.957 185393 DEBUG oslo_concurrency.lockutils [req-732bdedb-b0b4-4e7f-ae02-5e37c0f235b0 req-1fc6ef1c-80c2-41d5-b3cd-aeca0e28751c 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "e14bdaa0-ac4b-4c4a-8036-640cb431e8b7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:25:46 compute-0 nova_compute[185389]: 2026-01-26 17:25:46.957 185393 DEBUG oslo_concurrency.lockutils [req-732bdedb-b0b4-4e7f-ae02-5e37c0f235b0 req-1fc6ef1c-80c2-41d5-b3cd-aeca0e28751c 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "e14bdaa0-ac4b-4c4a-8036-640cb431e8b7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:25:46 compute-0 nova_compute[185389]: 2026-01-26 17:25:46.958 185393 DEBUG nova.compute.manager [req-732bdedb-b0b4-4e7f-ae02-5e37c0f235b0 req-1fc6ef1c-80c2-41d5-b3cd-aeca0e28751c 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: e14bdaa0-ac4b-4c4a-8036-640cb431e8b7] No waiting events found dispatching network-vif-plugged-d92e58d6-ae98-4c68-82c7-6b27e1ed65d9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 17:25:46 compute-0 nova_compute[185389]: 2026-01-26 17:25:46.958 185393 WARNING nova.compute.manager [req-732bdedb-b0b4-4e7f-ae02-5e37c0f235b0 req-1fc6ef1c-80c2-41d5-b3cd-aeca0e28751c 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: e14bdaa0-ac4b-4c4a-8036-640cb431e8b7] Received unexpected event network-vif-plugged-d92e58d6-ae98-4c68-82c7-6b27e1ed65d9 for instance with vm_state building and task_state spawning.
Jan 26 17:25:46 compute-0 nova_compute[185389]: 2026-01-26 17:25:46.959 185393 DEBUG nova.compute.manager [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] [instance: e14bdaa0-ac4b-4c4a-8036-640cb431e8b7] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 17:25:46 compute-0 nova_compute[185389]: 2026-01-26 17:25:46.965 185393 DEBUG nova.virt.driver [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Emitting event <LifecycleEvent: 1769448346.9651284, e14bdaa0-ac4b-4c4a-8036-640cb431e8b7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 17:25:46 compute-0 nova_compute[185389]: 2026-01-26 17:25:46.966 185393 INFO nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: e14bdaa0-ac4b-4c4a-8036-640cb431e8b7] VM Resumed (Lifecycle Event)
Jan 26 17:25:46 compute-0 nova_compute[185389]: 2026-01-26 17:25:46.968 185393 DEBUG nova.virt.libvirt.driver [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] [instance: e14bdaa0-ac4b-4c4a-8036-640cb431e8b7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 17:25:46 compute-0 nova_compute[185389]: 2026-01-26 17:25:46.973 185393 INFO nova.virt.libvirt.driver [-] [instance: e14bdaa0-ac4b-4c4a-8036-640cb431e8b7] Instance spawned successfully.
Jan 26 17:25:46 compute-0 nova_compute[185389]: 2026-01-26 17:25:46.973 185393 DEBUG nova.virt.libvirt.driver [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] [instance: e14bdaa0-ac4b-4c4a-8036-640cb431e8b7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 17:25:46 compute-0 nova_compute[185389]: 2026-01-26 17:25:46.988 185393 DEBUG nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: e14bdaa0-ac4b-4c4a-8036-640cb431e8b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 17:25:46 compute-0 nova_compute[185389]: 2026-01-26 17:25:46.998 185393 DEBUG nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: e14bdaa0-ac4b-4c4a-8036-640cb431e8b7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 17:25:47 compute-0 nova_compute[185389]: 2026-01-26 17:25:47.002 185393 DEBUG nova.virt.libvirt.driver [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] [instance: e14bdaa0-ac4b-4c4a-8036-640cb431e8b7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 17:25:47 compute-0 nova_compute[185389]: 2026-01-26 17:25:47.003 185393 DEBUG nova.virt.libvirt.driver [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] [instance: e14bdaa0-ac4b-4c4a-8036-640cb431e8b7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 17:25:47 compute-0 nova_compute[185389]: 2026-01-26 17:25:47.004 185393 DEBUG nova.virt.libvirt.driver [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] [instance: e14bdaa0-ac4b-4c4a-8036-640cb431e8b7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 17:25:47 compute-0 nova_compute[185389]: 2026-01-26 17:25:47.005 185393 DEBUG nova.virt.libvirt.driver [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] [instance: e14bdaa0-ac4b-4c4a-8036-640cb431e8b7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 17:25:47 compute-0 nova_compute[185389]: 2026-01-26 17:25:47.005 185393 DEBUG nova.virt.libvirt.driver [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] [instance: e14bdaa0-ac4b-4c4a-8036-640cb431e8b7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 17:25:47 compute-0 nova_compute[185389]: 2026-01-26 17:25:47.006 185393 DEBUG nova.virt.libvirt.driver [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] [instance: e14bdaa0-ac4b-4c4a-8036-640cb431e8b7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 17:25:47 compute-0 nova_compute[185389]: 2026-01-26 17:25:47.038 185393 INFO nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: e14bdaa0-ac4b-4c4a-8036-640cb431e8b7] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 17:25:47 compute-0 nova_compute[185389]: 2026-01-26 17:25:47.105 185393 INFO nova.compute.manager [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] [instance: e14bdaa0-ac4b-4c4a-8036-640cb431e8b7] Took 11.36 seconds to spawn the instance on the hypervisor.
Jan 26 17:25:47 compute-0 nova_compute[185389]: 2026-01-26 17:25:47.105 185393 DEBUG nova.compute.manager [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] [instance: e14bdaa0-ac4b-4c4a-8036-640cb431e8b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 17:25:47 compute-0 nova_compute[185389]: 2026-01-26 17:25:47.163 185393 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769448332.1626139, a7263205-e4bb-4bdd-bdf4-a91586c033c2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 17:25:47 compute-0 nova_compute[185389]: 2026-01-26 17:25:47.165 185393 INFO nova.compute.manager [-] [instance: a7263205-e4bb-4bdd-bdf4-a91586c033c2] VM Stopped (Lifecycle Event)
Jan 26 17:25:47 compute-0 nova_compute[185389]: 2026-01-26 17:25:47.218 185393 INFO nova.compute.manager [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] [instance: e14bdaa0-ac4b-4c4a-8036-640cb431e8b7] Took 11.95 seconds to build instance.
Jan 26 17:25:47 compute-0 nova_compute[185389]: 2026-01-26 17:25:47.261 185393 DEBUG nova.compute.manager [None req-3923ef01-a55a-48ab-a0ec-de5b6f08ecad - - - - - -] [instance: a7263205-e4bb-4bdd-bdf4-a91586c033c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 17:25:47 compute-0 nova_compute[185389]: 2026-01-26 17:25:47.270 185393 DEBUG oslo_concurrency.lockutils [None req-23ca54be-8396-4702-91e6-7b6611f64636 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] Lock "e14bdaa0-ac4b-4c4a-8036-640cb431e8b7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.110s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:25:48 compute-0 nova_compute[185389]: 2026-01-26 17:25:48.482 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:25:50 compute-0 ovn_controller[97699]: 2026-01-26T17:25:50Z|00160|binding|INFO|Releasing lport d9ab23aa-d039-4f5e-9e36-3a5b71cdfc53 from this chassis (sb_readonly=0)
Jan 26 17:25:50 compute-0 ovn_controller[97699]: 2026-01-26T17:25:50Z|00161|binding|INFO|Releasing lport 072b84ed-db94-41f8-b8ae-79603b591704 from this chassis (sb_readonly=0)
Jan 26 17:25:50 compute-0 nova_compute[185389]: 2026-01-26 17:25:50.191 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:25:50 compute-0 podman[259414]: 2026-01-26 17:25:50.202249594 +0000 UTC m=+0.088530324 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_managed=true, 
config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:25:50 compute-0 podman[259415]: 2026-01-26 17:25:50.208191835 +0000 UTC m=+0.088420161 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=kepler, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, io.openshift.expose-services=, io.openshift.tags=base rhel9, vcs-type=git, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release-0.7.12=, name=ubi9, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4)
Jan 26 17:25:50 compute-0 ovn_controller[97699]: 2026-01-26T17:25:50Z|00162|binding|INFO|Releasing lport d9ab23aa-d039-4f5e-9e36-3a5b71cdfc53 from this chassis (sb_readonly=0)
Jan 26 17:25:50 compute-0 ovn_controller[97699]: 2026-01-26T17:25:50Z|00163|binding|INFO|Releasing lport 072b84ed-db94-41f8-b8ae-79603b591704 from this chassis (sb_readonly=0)
Jan 26 17:25:50 compute-0 podman[259413]: 2026-01-26 17:25:50.459194609 +0000 UTC m=+0.344964595 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS)
Jan 26 17:25:50 compute-0 nova_compute[185389]: 2026-01-26 17:25:50.468 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:25:50 compute-0 nova_compute[185389]: 2026-01-26 17:25:50.979 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:25:53 compute-0 nova_compute[185389]: 2026-01-26 17:25:53.485 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:25:53 compute-0 nova_compute[185389]: 2026-01-26 17:25:53.547 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:25:53 compute-0 NetworkManager[56253]: <info>  [1769448353.5517] manager: (patch-provnet-10704259-5999-4b8c-a177-c158eb08b0dd-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/76)
Jan 26 17:25:53 compute-0 NetworkManager[56253]: <info>  [1769448353.5535] manager: (patch-br-int-to-provnet-10704259-5999-4b8c-a177-c158eb08b0dd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/77)
Jan 26 17:25:53 compute-0 nova_compute[185389]: 2026-01-26 17:25:53.749 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:25:53 compute-0 ovn_controller[97699]: 2026-01-26T17:25:53Z|00164|binding|INFO|Releasing lport d9ab23aa-d039-4f5e-9e36-3a5b71cdfc53 from this chassis (sb_readonly=0)
Jan 26 17:25:53 compute-0 ovn_controller[97699]: 2026-01-26T17:25:53Z|00165|binding|INFO|Releasing lport 072b84ed-db94-41f8-b8ae-79603b591704 from this chassis (sb_readonly=0)
Jan 26 17:25:53 compute-0 nova_compute[185389]: 2026-01-26 17:25:53.775 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:25:54 compute-0 nova_compute[185389]: 2026-01-26 17:25:54.137 185393 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769448339.1364577, cf6218c0-bc2c-4097-91df-f60657ef7ab1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 17:25:54 compute-0 nova_compute[185389]: 2026-01-26 17:25:54.138 185393 INFO nova.compute.manager [-] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] VM Stopped (Lifecycle Event)
Jan 26 17:25:54 compute-0 nova_compute[185389]: 2026-01-26 17:25:54.251 185393 DEBUG nova.compute.manager [None req-9616be45-9662-4a0c-aaaf-3f364116ab38 - - - - - -] [instance: cf6218c0-bc2c-4097-91df-f60657ef7ab1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 17:25:55 compute-0 nova_compute[185389]: 2026-01-26 17:25:55.374 185393 DEBUG nova.compute.manager [req-20af0b72-1288-40c6-9418-73105933653a req-4b9fc0bb-880b-45e1-bac4-9da489f38031 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: e14bdaa0-ac4b-4c4a-8036-640cb431e8b7] Received event network-changed-d92e58d6-ae98-4c68-82c7-6b27e1ed65d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 17:25:55 compute-0 nova_compute[185389]: 2026-01-26 17:25:55.375 185393 DEBUG nova.compute.manager [req-20af0b72-1288-40c6-9418-73105933653a req-4b9fc0bb-880b-45e1-bac4-9da489f38031 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: e14bdaa0-ac4b-4c4a-8036-640cb431e8b7] Refreshing instance network info cache due to event network-changed-d92e58d6-ae98-4c68-82c7-6b27e1ed65d9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 17:25:55 compute-0 nova_compute[185389]: 2026-01-26 17:25:55.376 185393 DEBUG oslo_concurrency.lockutils [req-20af0b72-1288-40c6-9418-73105933653a req-4b9fc0bb-880b-45e1-bac4-9da489f38031 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquiring lock "refresh_cache-e14bdaa0-ac4b-4c4a-8036-640cb431e8b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 17:25:55 compute-0 nova_compute[185389]: 2026-01-26 17:25:55.376 185393 DEBUG oslo_concurrency.lockutils [req-20af0b72-1288-40c6-9418-73105933653a req-4b9fc0bb-880b-45e1-bac4-9da489f38031 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquired lock "refresh_cache-e14bdaa0-ac4b-4c4a-8036-640cb431e8b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 17:25:55 compute-0 nova_compute[185389]: 2026-01-26 17:25:55.377 185393 DEBUG nova.network.neutron [req-20af0b72-1288-40c6-9418-73105933653a req-4b9fc0bb-880b-45e1-bac4-9da489f38031 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: e14bdaa0-ac4b-4c4a-8036-640cb431e8b7] Refreshing network info cache for port d92e58d6-ae98-4c68-82c7-6b27e1ed65d9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 17:25:55 compute-0 nova_compute[185389]: 2026-01-26 17:25:55.981 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:25:58 compute-0 nova_compute[185389]: 2026-01-26 17:25:58.488 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:25:59 compute-0 nova_compute[185389]: 2026-01-26 17:25:59.016 185393 DEBUG nova.network.neutron [req-20af0b72-1288-40c6-9418-73105933653a req-4b9fc0bb-880b-45e1-bac4-9da489f38031 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: e14bdaa0-ac4b-4c4a-8036-640cb431e8b7] Updated VIF entry in instance network info cache for port d92e58d6-ae98-4c68-82c7-6b27e1ed65d9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 17:25:59 compute-0 nova_compute[185389]: 2026-01-26 17:25:59.017 185393 DEBUG nova.network.neutron [req-20af0b72-1288-40c6-9418-73105933653a req-4b9fc0bb-880b-45e1-bac4-9da489f38031 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: e14bdaa0-ac4b-4c4a-8036-640cb431e8b7] Updating instance_info_cache with network_info: [{"id": "d92e58d6-ae98-4c68-82c7-6b27e1ed65d9", "address": "fa:16:3e:37:f2:15", "network": {"id": "132f3e5b-f2c7-4516-a253-7e99f4460896", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1043248803-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b66458547a0a47a3bec4b3808c40db40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd92e58d6-ae", "ovs_interfaceid": "d92e58d6-ae98-4c68-82c7-6b27e1ed65d9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 17:25:59 compute-0 nova_compute[185389]: 2026-01-26 17:25:59.264 185393 DEBUG oslo_concurrency.lockutils [req-20af0b72-1288-40c6-9418-73105933653a req-4b9fc0bb-880b-45e1-bac4-9da489f38031 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Releasing lock "refresh_cache-e14bdaa0-ac4b-4c4a-8036-640cb431e8b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 17:25:59 compute-0 podman[201244]: time="2026-01-26T17:25:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 17:25:59 compute-0 podman[201244]: @ - - [26/Jan/2026:17:25:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29741 "" "Go-http-client/1.1"
Jan 26 17:25:59 compute-0 podman[201244]: @ - - [26/Jan/2026:17:25:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4843 "" "Go-http-client/1.1"
Jan 26 17:26:00 compute-0 nova_compute[185389]: 2026-01-26 17:26:00.984 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:26:01 compute-0 openstack_network_exporter[204387]: ERROR   17:26:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 17:26:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:26:01 compute-0 openstack_network_exporter[204387]: ERROR   17:26:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 17:26:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:26:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:26:01.781 106955 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:26:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:26:01.782 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:26:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:26:01.784 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:26:03 compute-0 nova_compute[185389]: 2026-01-26 17:26:03.492 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:26:05 compute-0 nova_compute[185389]: 2026-01-26 17:26:05.985 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:26:08 compute-0 nova_compute[185389]: 2026-01-26 17:26:08.496 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:26:09 compute-0 nova_compute[185389]: 2026-01-26 17:26:09.087 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:26:10 compute-0 nova_compute[185389]: 2026-01-26 17:26:10.988 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:26:13 compute-0 nova_compute[185389]: 2026-01-26 17:26:13.500 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:26:14 compute-0 podman[259479]: 2026-01-26 17:26:14.22541008 +0000 UTC m=+0.097306333 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, vcs-type=git, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped 
down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, version=9.6, config_id=openstack_network_exporter, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Jan 26 17:26:14 compute-0 podman[259481]: 2026-01-26 17:26:14.230513587 +0000 UTC m=+0.097485246 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 26 17:26:14 compute-0 podman[259480]: 2026-01-26 17:26:14.233556791 +0000 UTC m=+0.107411548 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260120, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, tcib_build_tag=93ecf842527b95c82e14fba92451bd07)
Jan 26 17:26:14 compute-0 nova_compute[185389]: 2026-01-26 17:26:14.721 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:26:14 compute-0 nova_compute[185389]: 2026-01-26 17:26:14.769 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:26:14 compute-0 nova_compute[185389]: 2026-01-26 17:26:14.801 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:26:14 compute-0 podman[259540]: 2026-01-26 17:26:14.826254229 +0000 UTC m=+0.138114149 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 26 17:26:15 compute-0 nova_compute[185389]: 2026-01-26 17:26:15.993 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:26:16 compute-0 podman[259565]: 2026-01-26 17:26:16.208187923 +0000 UTC m=+0.097267201 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 26 17:26:16 compute-0 nova_compute[185389]: 2026-01-26 17:26:16.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:26:16 compute-0 nova_compute[185389]: 2026-01-26 17:26:16.720 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 17:26:18 compute-0 nova_compute[185389]: 2026-01-26 17:26:18.503 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:26:18 compute-0 nova_compute[185389]: 2026-01-26 17:26:18.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:26:18 compute-0 nova_compute[185389]: 2026-01-26 17:26:18.720 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 17:26:18 compute-0 nova_compute[185389]: 2026-01-26 17:26:18.721 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 17:26:18 compute-0 nova_compute[185389]: 2026-01-26 17:26:18.973 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "refresh_cache-f9b0315f-2a3c-471e-b629-b19d90a40a97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 17:26:18 compute-0 nova_compute[185389]: 2026-01-26 17:26:18.973 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquired lock "refresh_cache-f9b0315f-2a3c-471e-b629-b19d90a40a97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 17:26:18 compute-0 nova_compute[185389]: 2026-01-26 17:26:18.973 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 26 17:26:18 compute-0 nova_compute[185389]: 2026-01-26 17:26:18.973 185393 DEBUG nova.objects.instance [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lazy-loading 'info_cache' on Instance uuid f9b0315f-2a3c-471e-b629-b19d90a40a97 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 17:26:19 compute-0 nova_compute[185389]: 2026-01-26 17:26:19.294 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:26:20 compute-0 nova_compute[185389]: 2026-01-26 17:26:20.996 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:26:21 compute-0 podman[259585]: 2026-01-26 17:26:21.214464893 +0000 UTC m=+0.098231789 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi)
Jan 26 17:26:21 compute-0 podman[259584]: 2026-01-26 17:26:21.222937953 +0000 UTC m=+0.110752298 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, 
org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:26:21 compute-0 podman[259586]: 2026-01-26 17:26:21.236324925 +0000 UTC m=+0.116245586 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, vcs-type=git, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, maintainer=Red Hat, Inc., name=ubi9, release-0.7.12=, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, version=9.4, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, build-date=2024-09-18T21:23:30, architecture=x86_64, io.openshift.expose-services=)
Jan 26 17:26:22 compute-0 nova_compute[185389]: 2026-01-26 17:26:22.293 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] Updating instance_info_cache with network_info: [{"id": "4ea974be-d995-4c0f-bbcd-7a1410b167d8", "address": "fa:16:3e:ea:9e:d9", "network": {"id": "ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.123", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "237a863555d84bd386855d9cf781beb4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ea974be-d9", "ovs_interfaceid": "4ea974be-d995-4c0f-bbcd-7a1410b167d8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 17:26:22 compute-0 nova_compute[185389]: 2026-01-26 17:26:22.310 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Releasing lock "refresh_cache-f9b0315f-2a3c-471e-b629-b19d90a40a97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 17:26:22 compute-0 nova_compute[185389]: 2026-01-26 17:26:22.310 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 26 17:26:22 compute-0 nova_compute[185389]: 2026-01-26 17:26:22.311 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:26:22 compute-0 nova_compute[185389]: 2026-01-26 17:26:22.311 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:26:22 compute-0 ovn_controller[97699]: 2026-01-26T17:26:22Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:37:f2:15 10.100.0.3
Jan 26 17:26:22 compute-0 ovn_controller[97699]: 2026-01-26T17:26:22Z|00022|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:37:f2:15 10.100.0.3
Jan 26 17:26:22 compute-0 nova_compute[185389]: 2026-01-26 17:26:22.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:26:23 compute-0 nova_compute[185389]: 2026-01-26 17:26:23.507 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:26:26 compute-0 nova_compute[185389]: 2026-01-26 17:26:25.999 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:26:26 compute-0 ovn_controller[97699]: 2026-01-26T17:26:26Z|00166|binding|INFO|Releasing lport d9ab23aa-d039-4f5e-9e36-3a5b71cdfc53 from this chassis (sb_readonly=0)
Jan 26 17:26:26 compute-0 ovn_controller[97699]: 2026-01-26T17:26:26Z|00167|binding|INFO|Releasing lport 072b84ed-db94-41f8-b8ae-79603b591704 from this chassis (sb_readonly=0)
Jan 26 17:26:26 compute-0 nova_compute[185389]: 2026-01-26 17:26:26.179 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:26:26 compute-0 nova_compute[185389]: 2026-01-26 17:26:26.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:26:26 compute-0 nova_compute[185389]: 2026-01-26 17:26:26.885 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:26:26 compute-0 nova_compute[185389]: 2026-01-26 17:26:26.886 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:26:26 compute-0 nova_compute[185389]: 2026-01-26 17:26:26.886 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:26:26 compute-0 nova_compute[185389]: 2026-01-26 17:26:26.886 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 17:26:26 compute-0 nova_compute[185389]: 2026-01-26 17:26:26.986 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e14bdaa0-ac4b-4c4a-8036-640cb431e8b7/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:26:27 compute-0 nova_compute[185389]: 2026-01-26 17:26:27.058 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e14bdaa0-ac4b-4c4a-8036-640cb431e8b7/disk --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:26:27 compute-0 nova_compute[185389]: 2026-01-26 17:26:27.060 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e14bdaa0-ac4b-4c4a-8036-640cb431e8b7/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:26:27 compute-0 nova_compute[185389]: 2026-01-26 17:26:27.125 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e14bdaa0-ac4b-4c4a-8036-640cb431e8b7/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:26:27 compute-0 nova_compute[185389]: 2026-01-26 17:26:27.133 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f9b0315f-2a3c-471e-b629-b19d90a40a97/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:26:27 compute-0 nova_compute[185389]: 2026-01-26 17:26:27.193 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f9b0315f-2a3c-471e-b629-b19d90a40a97/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:26:27 compute-0 nova_compute[185389]: 2026-01-26 17:26:27.195 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f9b0315f-2a3c-471e-b629-b19d90a40a97/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:26:27 compute-0 nova_compute[185389]: 2026-01-26 17:26:27.263 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f9b0315f-2a3c-471e-b629-b19d90a40a97/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:26:27 compute-0 nova_compute[185389]: 2026-01-26 17:26:27.657 185393 WARNING nova.virt.libvirt.driver [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:26:27 compute-0 nova_compute[185389]: 2026-01-26 17:26:27.658 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4964MB free_disk=72.27888107299805GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 17:26:27 compute-0 nova_compute[185389]: 2026-01-26 17:26:27.659 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:26:27 compute-0 nova_compute[185389]: 2026-01-26 17:26:27.659 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:26:27 compute-0 nova_compute[185389]: 2026-01-26 17:26:27.739 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance f9b0315f-2a3c-471e-b629-b19d90a40a97 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 17:26:27 compute-0 nova_compute[185389]: 2026-01-26 17:26:27.740 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance e14bdaa0-ac4b-4c4a-8036-640cb431e8b7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 17:26:27 compute-0 nova_compute[185389]: 2026-01-26 17:26:27.740 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 17:26:27 compute-0 nova_compute[185389]: 2026-01-26 17:26:27.741 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 17:26:27 compute-0 nova_compute[185389]: 2026-01-26 17:26:27.802 185393 DEBUG nova.compute.provider_tree [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:26:27 compute-0 nova_compute[185389]: 2026-01-26 17:26:27.818 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:26:27 compute-0 nova_compute[185389]: 2026-01-26 17:26:27.841 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 17:26:27 compute-0 nova_compute[185389]: 2026-01-26 17:26:27.842 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.182s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:26:28 compute-0 nova_compute[185389]: 2026-01-26 17:26:28.511 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:26:28 compute-0 nova_compute[185389]: 2026-01-26 17:26:28.605 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:26:29 compute-0 podman[201244]: time="2026-01-26T17:26:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 17:26:29 compute-0 podman[201244]: @ - - [26/Jan/2026:17:26:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29741 "" "Go-http-client/1.1"
Jan 26 17:26:29 compute-0 podman[201244]: @ - - [26/Jan/2026:17:26:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4854 "" "Go-http-client/1.1"
Jan 26 17:26:31 compute-0 nova_compute[185389]: 2026-01-26 17:26:31.003 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:26:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:31.357 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 26 17:26:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:31.358 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 26 17:26:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:31.358 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:26:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:31.359 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f04ce8af020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:26:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:31.360 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:26:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:31.360 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:26:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:31.360 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:26:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:31.360 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:26:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:31.361 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:26:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:31.361 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:26:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:31.362 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:26:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:31.362 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:26:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:31.362 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:26:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:31.363 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:26:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:31.363 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:26:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:31.363 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:26:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:31.363 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8adb20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:26:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:31.364 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:26:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:31.364 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:26:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:31.365 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:26:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:31.365 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:26:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:31.365 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:26:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:31.366 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:26:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:31.366 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:26:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:31.366 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:26:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:31.367 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:26:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:31.367 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:26:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:31.367 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:26:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:31.368 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:26:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:31.365 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance e14bdaa0-ac4b-4c4a-8036-640cb431e8b7 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 26 17:26:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:31.369 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/e14bdaa0-ac4b-4c4a-8036-640cb431e8b7 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}f609241ecdf9402bd0546eda97196742cf90b225f1ce4eb867c55aad4d129116" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 26 17:26:31 compute-0 openstack_network_exporter[204387]: ERROR   17:26:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 17:26:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:26:31 compute-0 openstack_network_exporter[204387]: ERROR   17:26:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 17:26:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.350 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 2084 Content-Type: application/json Date: Mon, 26 Jan 2026 17:26:31 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-54d01ff5-cc06-46bf-b49c-e70fd82a0f33 x-openstack-request-id: req-54d01ff5-cc06-46bf-b49c-e70fd82a0f33 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.351 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "e14bdaa0-ac4b-4c4a-8036-640cb431e8b7", "name": "tempest-TestServerBasicOps-server-917096142", "status": "ACTIVE", "tenant_id": "b66458547a0a47a3bec4b3808c40db40", "user_id": "f28bfbc50d234cffbe617e420542c11d", "metadata": {"meta1": "data1", "meta2": "data2", "metaN": "dataN"}, "hostId": "0aaa280a6239b4cbf2f5a68434f75e91106487e56018b2831ffb1ac1", "image": {"id": "90acf026-cf3a-409a-999e-35d89bb9a6bf", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/90acf026-cf3a-409a-999e-35d89bb9a6bf"}]}, "flavor": {"id": "8d013773-e8ea-4b83-a8e3-f58d9749637f", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/8d013773-e8ea-4b83-a8e3-f58d9749637f"}]}, "created": "2026-01-26T17:25:33Z", "updated": "2026-01-26T17:25:47Z", "addresses": {"tempest-TestServerBasicOps-1043248803-network": [{"version": 4, "addr": "10.100.0.3", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:37:f2:15"}, {"version": 4, "addr": "192.168.122.248", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:37:f2:15"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/e14bdaa0-ac4b-4c4a-8036-640cb431e8b7"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/e14bdaa0-ac4b-4c4a-8036-640cb431e8b7"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-TestServerBasicOps-1906012598", "OS-SRV-USG:launched_at": "2026-01-26T17:25:47.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-secgroup-smoke-1708724301"}, {"name": "tempest-securitygroup--1507206639"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": 
"instance-0000000f", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.351 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/e14bdaa0-ac4b-4c4a-8036-640cb431e8b7 used request id req-54d01ff5-cc06-46bf-b49c-e70fd82a0f33 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.353 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'e14bdaa0-ac4b-4c4a-8036-640cb431e8b7', 'name': 'tempest-TestServerBasicOps-server-917096142', 'flavor': {'id': '8d013773-e8ea-4b83-a8e3-f58d9749637f', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '90acf026-cf3a-409a-999e-35d89bb9a6bf'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'b66458547a0a47a3bec4b3808c40db40', 'user_id': 'f28bfbc50d234cffbe617e420542c11d', 'hostId': '0aaa280a6239b4cbf2f5a68434f75e91106487e56018b2831ffb1ac1', 'status': 'active', 'metadata': {'meta1': 'data1', 'meta2': 'data2', 'metaN': 'dataN'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.356 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'f9b0315f-2a3c-471e-b629-b19d90a40a97', 'name': 'te-9772802-asg-iiflfa6ewgov-qnk6sbr7rvkm-ziuhg6hort2c', 'flavor': {'id': '8d013773-e8ea-4b83-a8e3-f58d9749637f', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'a3153c85-d830-4fd6-8cd6-1a69e6723a9e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000d', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '237a863555d84bd386855d9cf781beb4', 'user_id': '5ca35c18e54b493f9efdfe2218cce3c7', 'hostId': 'd53ff20533f73aa1094f7d1b315e252b91e3e85487374d883e31cb42', 'status': 'active', 'metadata': {'metering.server_group': '21873820-28a9-4731-9256-efbf2eb46b4d'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.357 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.357 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.357 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.357 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.358 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-26T17:26:32.357510) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.399 14 DEBUG ceilometer.compute.pollsters [-] e14bdaa0-ac4b-4c4a-8036-640cb431e8b7/disk.device.write.bytes volume: 72916992 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.400 14 DEBUG ceilometer.compute.pollsters [-] e14bdaa0-ac4b-4c4a-8036-640cb431e8b7/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.435 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.write.bytes volume: 72855552 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.436 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.437 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.437 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f04ce8af080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.437 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.437 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.438 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.438 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.438 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-26T17:26:32.438267) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.439 14 DEBUG ceilometer.compute.pollsters [-] e14bdaa0-ac4b-4c4a-8036-640cb431e8b7/disk.device.write.latency volume: 4001993817 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.439 14 DEBUG ceilometer.compute.pollsters [-] e14bdaa0-ac4b-4c4a-8036-640cb431e8b7/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.439 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.write.latency volume: 16986439843 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.440 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.440 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.440 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f04ce8af0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.440 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.441 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.441 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.441 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.441 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-26T17:26:32.441550) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.442 14 DEBUG ceilometer.compute.pollsters [-] e14bdaa0-ac4b-4c4a-8036-640cb431e8b7/disk.device.write.requests volume: 304 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.442 14 DEBUG ceilometer.compute.pollsters [-] e14bdaa0-ac4b-4c4a-8036-640cb431e8b7/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.442 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.write.requests volume: 317 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.443 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.443 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.443 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f04ce8af470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.444 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.444 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.444 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.444 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.445 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-26T17:26:32.444766) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.449 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for e14bdaa0-ac4b-4c4a-8036-640cb431e8b7 / tapd92e58d6-ae inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.449 14 DEBUG ceilometer.compute.pollsters [-] e14bdaa0-ac4b-4c4a-8036-640cb431e8b7/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.452 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/network.incoming.bytes.delta volume: 1172 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.453 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.453 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f04ce8ac260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.453 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.454 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.454 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.454 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.455 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2026-01-26T17:26:32.454654) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.455 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.455 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: tempest-TestServerBasicOps-server-917096142>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-TestServerBasicOps-server-917096142>]
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.456 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f04cfa81c70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.456 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.456 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.457 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.457 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.457 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-26T17:26:32.457313) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.479 14 DEBUG ceilometer.compute.pollsters [-] e14bdaa0-ac4b-4c4a-8036-640cb431e8b7/cpu volume: 35740000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.497 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/cpu volume: 114910000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.498 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.498 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f04ce8ac170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.498 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.498 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.499 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.499 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.499 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-26T17:26:32.499400) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.500 14 DEBUG ceilometer.compute.pollsters [-] e14bdaa0-ac4b-4c4a-8036-640cb431e8b7/network.incoming.packets volume: 15 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.500 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/network.incoming.packets volume: 8 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.501 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.501 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f04ce8af140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.501 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.501 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.502 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.502 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.502 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-26T17:26:32.502373) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.503 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.503 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f04ce8adb50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.503 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.504 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.504 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.504 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.504 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-26T17:26:32.504503) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.505 14 DEBUG ceilometer.compute.pollsters [-] e14bdaa0-ac4b-4c4a-8036-640cb431e8b7/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.505 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.506 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.506 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f04ce8ad9a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.506 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.506 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.506 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.507 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.507 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-26T17:26:32.507179) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.507 14 DEBUG ceilometer.compute.pollsters [-] e14bdaa0-ac4b-4c4a-8036-640cb431e8b7/network.outgoing.bytes volume: 1550 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.508 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.508 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.508 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f04ce8af1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.509 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.509 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.509 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.509 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.510 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-26T17:26:32.509898) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.510 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.511 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f04ce8ad8e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.511 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.511 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.511 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.512 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.512 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-26T17:26:32.512026) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.512 14 DEBUG ceilometer.compute.pollsters [-] e14bdaa0-ac4b-4c4a-8036-640cb431e8b7/network.outgoing.packets volume: 15 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.513 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.513 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.513 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f04ce8ada60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.514 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.514 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.514 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.514 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.515 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-26T17:26:32.514652) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.515 14 DEBUG ceilometer.compute.pollsters [-] e14bdaa0-ac4b-4c4a-8036-640cb431e8b7/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.515 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/network.outgoing.bytes.delta volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.516 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.516 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f04ce8adaf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.516 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.516 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8adb20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.517 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8adb20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.517 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.517 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2026-01-26T17:26:32.517422) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.518 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.518 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: tempest-TestServerBasicOps-server-917096142>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-TestServerBasicOps-server-917096142>]
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.518 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f04ce8ad880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.519 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.519 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.519 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.519 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.520 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-26T17:26:32.519775) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.520 14 DEBUG ceilometer.compute.pollsters [-] e14bdaa0-ac4b-4c4a-8036-640cb431e8b7/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.520 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.521 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.521 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f04ce8af3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.521 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.522 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.522 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.522 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.523 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-26T17:26:32.522574) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.523 14 DEBUG ceilometer.compute.pollsters [-] e14bdaa0-ac4b-4c4a-8036-640cb431e8b7/memory.usage volume: 46.94921875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.523 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/memory.usage volume: 43.37890625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.524 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.524 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f04ce8af6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.524 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.525 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.525 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.525 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.525 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-26T17:26:32.525470) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.526 14 DEBUG ceilometer.compute.pollsters [-] e14bdaa0-ac4b-4c4a-8036-640cb431e8b7/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.526 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.527 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.527 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f04ce8af410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.527 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.527 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.528 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.528 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.528 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-26T17:26:32.528327) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.529 14 DEBUG ceilometer.compute.pollsters [-] e14bdaa0-ac4b-4c4a-8036-640cb431e8b7/network.incoming.bytes volume: 1796 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.529 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/network.incoming.bytes volume: 1262 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.529 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.530 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f04ce8ac470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.530 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.530 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.530 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.531 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.531 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-26T17:26:32.531152) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.531 14 DEBUG ceilometer.compute.pollsters [-] e14bdaa0-ac4b-4c4a-8036-640cb431e8b7/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.532 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.532 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.532 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f04d17142f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.533 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.533 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.533 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.533 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.534 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-26T17:26:32.533814) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.548 14 DEBUG ceilometer.compute.pollsters [-] e14bdaa0-ac4b-4c4a-8036-640cb431e8b7/disk.device.allocation volume: 30351360 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.548 14 DEBUG ceilometer.compute.pollsters [-] e14bdaa0-ac4b-4c4a-8036-640cb431e8b7/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.562 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.allocation volume: 30089216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.563 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.564 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.564 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f04ce8afe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.564 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.564 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.565 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.565 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.565 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-26T17:26:32.565366) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.566 14 DEBUG ceilometer.compute.pollsters [-] e14bdaa0-ac4b-4c4a-8036-640cb431e8b7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.566 14 DEBUG ceilometer.compute.pollsters [-] e14bdaa0-ac4b-4c4a-8036-640cb431e8b7/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.567 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.567 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.567 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.568 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f04ce8aede0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.568 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.568 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.568 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.569 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.569 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-26T17:26:32.569189) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.569 14 DEBUG ceilometer.compute.pollsters [-] e14bdaa0-ac4b-4c4a-8036-640cb431e8b7/disk.device.read.bytes volume: 30525952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.570 14 DEBUG ceilometer.compute.pollsters [-] e14bdaa0-ac4b-4c4a-8036-640cb431e8b7/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.570 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.read.bytes volume: 29129728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.570 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.571 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.571 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f04ce8aef00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.571 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.572 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.572 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.573 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.573 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-26T17:26:32.573252) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.573 14 DEBUG ceilometer.compute.pollsters [-] e14bdaa0-ac4b-4c4a-8036-640cb431e8b7/disk.device.read.latency volume: 564118043 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.574 14 DEBUG ceilometer.compute.pollsters [-] e14bdaa0-ac4b-4c4a-8036-640cb431e8b7/disk.device.read.latency volume: 61604577 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.574 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.read.latency volume: 440552323 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.574 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.read.latency volume: 54239181 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.575 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.575 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f04ce8ac710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.575 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.576 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.576 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.576 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.576 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-26T17:26:32.576577) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.577 14 DEBUG ceilometer.compute.pollsters [-] e14bdaa0-ac4b-4c4a-8036-640cb431e8b7/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.577 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.578 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.578 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f04ce8aef60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.578 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.578 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.579 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.579 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.579 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-26T17:26:32.579305) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.579 14 DEBUG ceilometer.compute.pollsters [-] e14bdaa0-ac4b-4c4a-8036-640cb431e8b7/disk.device.read.requests volume: 1105 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.580 14 DEBUG ceilometer.compute.pollsters [-] e14bdaa0-ac4b-4c4a-8036-640cb431e8b7/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.580 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.read.requests volume: 1044 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.580 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.581 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.581 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f04ce8aefc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.581 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.582 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.582 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.582 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.582 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-26T17:26:32.582576) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.583 14 DEBUG ceilometer.compute.pollsters [-] e14bdaa0-ac4b-4c4a-8036-640cb431e8b7/disk.device.usage volume: 29949952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.583 14 DEBUG ceilometer.compute.pollsters [-] e14bdaa0-ac4b-4c4a-8036-640cb431e8b7/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.583 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.usage volume: 29884416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.584 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.584 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.585 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.585 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.585 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.585 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.585 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.585 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.585 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.586 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.586 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.586 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.586 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.586 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.586 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.586 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.586 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.586 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.586 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.586 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.586 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.587 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.587 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.587 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.587 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.587 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.587 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:26:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:26:32.587 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:26:33 compute-0 nova_compute[185389]: 2026-01-26 17:26:33.245 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:26:33 compute-0 nova_compute[185389]: 2026-01-26 17:26:33.514 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:26:34 compute-0 nova_compute[185389]: 2026-01-26 17:26:34.426 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:26:34 compute-0 nova_compute[185389]: 2026-01-26 17:26:34.837 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:26:34 compute-0 nova_compute[185389]: 2026-01-26 17:26:34.838 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:26:36 compute-0 nova_compute[185389]: 2026-01-26 17:26:36.006 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:26:37 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:26:37.450 106955 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '0e:35:12', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'de:09:3c:73:7d:ab'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 17:26:37 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:26:37.451 106955 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 26 17:26:37 compute-0 nova_compute[185389]: 2026-01-26 17:26:37.453 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:26:38 compute-0 nova_compute[185389]: 2026-01-26 17:26:38.518 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:26:40 compute-0 nova_compute[185389]: 2026-01-26 17:26:40.714 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:26:41 compute-0 nova_compute[185389]: 2026-01-26 17:26:41.009 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:26:43 compute-0 nova_compute[185389]: 2026-01-26 17:26:43.523 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:26:44 compute-0 podman[259672]: 2026-01-26 17:26:44.79825553 +0000 UTC m=+0.083808846 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 26 17:26:44 compute-0 podman[259670]: 2026-01-26 17:26:44.80930318 +0000 UTC m=+0.109323769 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=openstack_network_exporter, distribution-scope=public, release=1755695350, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.openshift.expose-services=, managed_by=edpm_ansible, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 26 17:26:44 compute-0 podman[259671]: 2026-01-26 17:26:44.83728105 +0000 UTC m=+0.136685702 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260120, org.label-schema.license=GPLv2, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image)
Jan 26 17:26:44 compute-0 podman[259732]: 2026-01-26 17:26:44.975860732 +0000 UTC m=+0.092537803 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 26 17:26:46 compute-0 nova_compute[185389]: 2026-01-26 17:26:46.012 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:26:46 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:26:46.454 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1c72c11d-5050-47c3-89e8-912766588fb3, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:26:47 compute-0 podman[259756]: 2026-01-26 17:26:47.171423571 +0000 UTC m=+0.061120070 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Jan 26 17:26:48 compute-0 nova_compute[185389]: 2026-01-26 17:26:48.324 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:26:48 compute-0 nova_compute[185389]: 2026-01-26 17:26:48.527 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:26:48 compute-0 ovn_controller[97699]: 2026-01-26T17:26:48Z|00168|binding|INFO|Releasing lport d9ab23aa-d039-4f5e-9e36-3a5b71cdfc53 from this chassis (sb_readonly=0)
Jan 26 17:26:48 compute-0 ovn_controller[97699]: 2026-01-26T17:26:48Z|00169|binding|INFO|Releasing lport 072b84ed-db94-41f8-b8ae-79603b591704 from this chassis (sb_readonly=0)
Jan 26 17:26:48 compute-0 nova_compute[185389]: 2026-01-26 17:26:48.875 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:26:51 compute-0 nova_compute[185389]: 2026-01-26 17:26:51.016 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:26:52 compute-0 podman[259778]: 2026-01-26 17:26:52.222442515 +0000 UTC m=+0.082587143 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=kepler, vendor=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, name=ubi9, version=9.4, io.buildah.version=1.29.0, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, architecture=x86_64, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, release-0.7.12=)
Jan 26 17:26:52 compute-0 podman[259777]: 2026-01-26 17:26:52.23844914 +0000 UTC m=+0.111460387 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 26 17:26:52 compute-0 podman[259776]: 2026-01-26 17:26:52.249861539 +0000 UTC m=+0.126501554 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller)
Jan 26 17:26:53 compute-0 ovn_controller[97699]: 2026-01-26T17:26:53Z|00170|binding|INFO|Releasing lport d9ab23aa-d039-4f5e-9e36-3a5b71cdfc53 from this chassis (sb_readonly=0)
Jan 26 17:26:53 compute-0 ovn_controller[97699]: 2026-01-26T17:26:53Z|00171|binding|INFO|Releasing lport 072b84ed-db94-41f8-b8ae-79603b591704 from this chassis (sb_readonly=0)
Jan 26 17:26:53 compute-0 nova_compute[185389]: 2026-01-26 17:26:53.295 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:26:53 compute-0 nova_compute[185389]: 2026-01-26 17:26:53.529 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:26:56 compute-0 nova_compute[185389]: 2026-01-26 17:26:56.019 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:26:58 compute-0 nova_compute[185389]: 2026-01-26 17:26:58.531 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:26:59 compute-0 podman[201244]: time="2026-01-26T17:26:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 17:26:59 compute-0 podman[201244]: @ - - [26/Jan/2026:17:26:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29741 "" "Go-http-client/1.1"
Jan 26 17:26:59 compute-0 podman[201244]: @ - - [26/Jan/2026:17:26:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4850 "" "Go-http-client/1.1"
Jan 26 17:27:01 compute-0 nova_compute[185389]: 2026-01-26 17:27:01.021 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:27:01 compute-0 openstack_network_exporter[204387]: ERROR   17:27:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 17:27:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:27:01 compute-0 openstack_network_exporter[204387]: ERROR   17:27:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 17:27:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:27:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:27:01.783 106955 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:27:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:27:01.784 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:27:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:27:01.792 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.008s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:27:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:27:03.253 107338 DEBUG eventlet.wsgi.server [-] (107338) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Jan 26 17:27:03 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:27:03.256 107338 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /latest/meta-data/public-ipv4 HTTP/1.0
Jan 26 17:27:03 compute-0 ovn_metadata_agent[106950]: Accept: */*
Jan 26 17:27:03 compute-0 ovn_metadata_agent[106950]: Connection: close
Jan 26 17:27:03 compute-0 ovn_metadata_agent[106950]: Content-Type: text/plain
Jan 26 17:27:03 compute-0 ovn_metadata_agent[106950]: Host: 169.254.169.254
Jan 26 17:27:03 compute-0 ovn_metadata_agent[106950]: User-Agent: curl/7.84.0
Jan 26 17:27:03 compute-0 ovn_metadata_agent[106950]: X-Forwarded-For: 10.100.0.3
Jan 26 17:27:03 compute-0 ovn_metadata_agent[106950]: X-Ovn-Network-Id: 132f3e5b-f2c7-4516-a253-7e99f4460896 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Jan 26 17:27:03 compute-0 nova_compute[185389]: 2026-01-26 17:27:03.535 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:27:04 compute-0 ovn_controller[97699]: 2026-01-26T17:27:04Z|00172|binding|INFO|Releasing lport d9ab23aa-d039-4f5e-9e36-3a5b71cdfc53 from this chassis (sb_readonly=0)
Jan 26 17:27:04 compute-0 ovn_controller[97699]: 2026-01-26T17:27:04Z|00173|binding|INFO|Releasing lport 072b84ed-db94-41f8-b8ae-79603b591704 from this chassis (sb_readonly=0)
Jan 26 17:27:04 compute-0 nova_compute[185389]: 2026-01-26 17:27:04.400 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:27:06 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:27:06.015 107338 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Jan 26 17:27:06 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:27:06.016 107338 INFO eventlet.wsgi.server [-] 10.100.0.3,<local> "GET /latest/meta-data/public-ipv4 HTTP/1.1" status: 200  len: 151 time: 2.7605083
Jan 26 17:27:06 compute-0 haproxy-metadata-proxy-132f3e5b-f2c7-4516-a253-7e99f4460896[259403]: 10.100.0.3:40920 [26/Jan/2026:17:27:03.251] listener listener/metadata 0/0/0/2765/2765 200 135 - - ---- 1/1/0/0/0 0/0 "GET /latest/meta-data/public-ipv4 HTTP/1.1"
Jan 26 17:27:06 compute-0 nova_compute[185389]: 2026-01-26 17:27:06.023 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:27:06 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:27:06.105 107338 DEBUG eventlet.wsgi.server [-] (107338) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Jan 26 17:27:06 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:27:06.108 107338 DEBUG neutron.agent.ovn.metadata.server [-] Request: POST /openstack/2013-10-17/password HTTP/1.0
Jan 26 17:27:06 compute-0 ovn_metadata_agent[106950]: Accept: */*
Jan 26 17:27:06 compute-0 ovn_metadata_agent[106950]: Connection: close
Jan 26 17:27:06 compute-0 ovn_metadata_agent[106950]: Content-Length: 100
Jan 26 17:27:06 compute-0 ovn_metadata_agent[106950]: Content-Type: application/x-www-form-urlencoded
Jan 26 17:27:06 compute-0 ovn_metadata_agent[106950]: Host: 169.254.169.254
Jan 26 17:27:06 compute-0 ovn_metadata_agent[106950]: User-Agent: curl/7.84.0
Jan 26 17:27:06 compute-0 ovn_metadata_agent[106950]: X-Forwarded-For: 10.100.0.3
Jan 26 17:27:06 compute-0 ovn_metadata_agent[106950]: X-Ovn-Network-Id: 132f3e5b-f2c7-4516-a253-7e99f4460896
Jan 26 17:27:06 compute-0 ovn_metadata_agent[106950]: 
Jan 26 17:27:06 compute-0 ovn_metadata_agent[106950]: testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Jan 26 17:27:06 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:27:06.457 107338 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Jan 26 17:27:06 compute-0 haproxy-metadata-proxy-132f3e5b-f2c7-4516-a253-7e99f4460896[259403]: 10.100.0.3:40932 [26/Jan/2026:17:27:06.103] listener listener/metadata 0/0/0/354/354 200 118 - - ---- 1/1/0/0/0 0/0 "POST /openstack/2013-10-17/password HTTP/1.1"
Jan 26 17:27:06 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:27:06.458 107338 INFO eventlet.wsgi.server [-] 10.100.0.3,<local> "POST /openstack/2013-10-17/password HTTP/1.1" status: 200  len: 134 time: 0.3505459
Jan 26 17:27:08 compute-0 nova_compute[185389]: 2026-01-26 17:27:08.486 185393 DEBUG oslo_concurrency.lockutils [None req-889dfd89-22c8-4ce0-ac4e-7c850de90c06 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] Acquiring lock "e14bdaa0-ac4b-4c4a-8036-640cb431e8b7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:27:08 compute-0 nova_compute[185389]: 2026-01-26 17:27:08.488 185393 DEBUG oslo_concurrency.lockutils [None req-889dfd89-22c8-4ce0-ac4e-7c850de90c06 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] Lock "e14bdaa0-ac4b-4c4a-8036-640cb431e8b7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:27:08 compute-0 nova_compute[185389]: 2026-01-26 17:27:08.488 185393 DEBUG oslo_concurrency.lockutils [None req-889dfd89-22c8-4ce0-ac4e-7c850de90c06 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] Acquiring lock "e14bdaa0-ac4b-4c4a-8036-640cb431e8b7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:27:08 compute-0 nova_compute[185389]: 2026-01-26 17:27:08.489 185393 DEBUG oslo_concurrency.lockutils [None req-889dfd89-22c8-4ce0-ac4e-7c850de90c06 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] Lock "e14bdaa0-ac4b-4c4a-8036-640cb431e8b7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:27:08 compute-0 nova_compute[185389]: 2026-01-26 17:27:08.489 185393 DEBUG oslo_concurrency.lockutils [None req-889dfd89-22c8-4ce0-ac4e-7c850de90c06 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] Lock "e14bdaa0-ac4b-4c4a-8036-640cb431e8b7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:27:08 compute-0 nova_compute[185389]: 2026-01-26 17:27:08.491 185393 INFO nova.compute.manager [None req-889dfd89-22c8-4ce0-ac4e-7c850de90c06 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] [instance: e14bdaa0-ac4b-4c4a-8036-640cb431e8b7] Terminating instance
Jan 26 17:27:08 compute-0 nova_compute[185389]: 2026-01-26 17:27:08.492 185393 DEBUG nova.compute.manager [None req-889dfd89-22c8-4ce0-ac4e-7c850de90c06 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] [instance: e14bdaa0-ac4b-4c4a-8036-640cb431e8b7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 17:27:08 compute-0 nova_compute[185389]: 2026-01-26 17:27:08.540 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:27:08 compute-0 kernel: tapd92e58d6-ae (unregistering): left promiscuous mode
Jan 26 17:27:08 compute-0 NetworkManager[56253]: <info>  [1769448428.8242] device (tapd92e58d6-ae): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 17:27:08 compute-0 ovn_controller[97699]: 2026-01-26T17:27:08Z|00174|binding|INFO|Releasing lport d92e58d6-ae98-4c68-82c7-6b27e1ed65d9 from this chassis (sb_readonly=0)
Jan 26 17:27:08 compute-0 nova_compute[185389]: 2026-01-26 17:27:08.843 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:27:08 compute-0 ovn_controller[97699]: 2026-01-26T17:27:08Z|00175|binding|INFO|Setting lport d92e58d6-ae98-4c68-82c7-6b27e1ed65d9 down in Southbound
Jan 26 17:27:08 compute-0 ovn_controller[97699]: 2026-01-26T17:27:08Z|00176|binding|INFO|Removing iface tapd92e58d6-ae ovn-installed in OVS
Jan 26 17:27:08 compute-0 nova_compute[185389]: 2026-01-26 17:27:08.854 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:27:08 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:27:08.861 106955 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:37:f2:15 10.100.0.3'], port_security=['fa:16:3e:37:f2:15 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'e14bdaa0-ac4b-4c4a-8036-640cb431e8b7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-132f3e5b-f2c7-4516-a253-7e99f4460896', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b66458547a0a47a3bec4b3808c40db40', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8a4d9b71-d82e-4d55-bedb-b6fa13fe31be ed11cbd8-8e48-40fa-a512-f7d754992027', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.248'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e040d7fa-2bc0-4c36-a18d-e3df5ed3586c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7faee18cfdf0>], logical_port=d92e58d6-ae98-4c68-82c7-6b27e1ed65d9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7faee18cfdf0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 17:27:08 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:27:08.862 106955 INFO neutron.agent.ovn.metadata.agent [-] Port d92e58d6-ae98-4c68-82c7-6b27e1ed65d9 in datapath 132f3e5b-f2c7-4516-a253-7e99f4460896 unbound from our chassis
Jan 26 17:27:08 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:27:08.864 106955 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 132f3e5b-f2c7-4516-a253-7e99f4460896, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 17:27:08 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:27:08.867 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[7b6bf8fc-9771-4f60-8f89-b89400ef749d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:27:08 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:27:08.868 106955 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-132f3e5b-f2c7-4516-a253-7e99f4460896 namespace which is not needed anymore
Jan 26 17:27:08 compute-0 nova_compute[185389]: 2026-01-26 17:27:08.879 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:27:08 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000000f.scope: Deactivated successfully.
Jan 26 17:27:08 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000000f.scope: Consumed 43.880s CPU time.
Jan 26 17:27:08 compute-0 systemd-machined[156679]: Machine qemu-16-instance-0000000f terminated.
Jan 26 17:27:09 compute-0 neutron-haproxy-ovnmeta-132f3e5b-f2c7-4516-a253-7e99f4460896[259385]: [NOTICE]   (259399) : haproxy version is 2.8.14-c23fe91
Jan 26 17:27:09 compute-0 neutron-haproxy-ovnmeta-132f3e5b-f2c7-4516-a253-7e99f4460896[259385]: [NOTICE]   (259399) : path to executable is /usr/sbin/haproxy
Jan 26 17:27:09 compute-0 neutron-haproxy-ovnmeta-132f3e5b-f2c7-4516-a253-7e99f4460896[259385]: [ALERT]    (259399) : Current worker (259403) exited with code 143 (Terminated)
Jan 26 17:27:09 compute-0 neutron-haproxy-ovnmeta-132f3e5b-f2c7-4516-a253-7e99f4460896[259385]: [WARNING]  (259399) : All workers exited. Exiting... (0)
Jan 26 17:27:09 compute-0 systemd[1]: libpod-7e0b1f1218d7f5cdebb4e45575ec8c2ef7bb994aa534ab1cf5a032336520871c.scope: Deactivated successfully.
Jan 26 17:27:09 compute-0 conmon[259385]: conmon 7e0b1f1218d7f5cdebb4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7e0b1f1218d7f5cdebb4e45575ec8c2ef7bb994aa534ab1cf5a032336520871c.scope/container/memory.events
Jan 26 17:27:09 compute-0 podman[259854]: 2026-01-26 17:27:09.051150243 +0000 UTC m=+0.066737993 container died 7e0b1f1218d7f5cdebb4e45575ec8c2ef7bb994aa534ab1cf5a032336520871c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-132f3e5b-f2c7-4516-a253-7e99f4460896, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 17:27:09 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7e0b1f1218d7f5cdebb4e45575ec8c2ef7bb994aa534ab1cf5a032336520871c-userdata-shm.mount: Deactivated successfully.
Jan 26 17:27:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-10191dcc3693201a1ee02b28cd2874dd6a47e017bbe7cd385c44010ed1d0f3ee-merged.mount: Deactivated successfully.
Jan 26 17:27:09 compute-0 podman[259854]: 2026-01-26 17:27:09.104788899 +0000 UTC m=+0.120376649 container cleanup 7e0b1f1218d7f5cdebb4e45575ec8c2ef7bb994aa534ab1cf5a032336520871c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-132f3e5b-f2c7-4516-a253-7e99f4460896, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 26 17:27:09 compute-0 systemd[1]: libpod-conmon-7e0b1f1218d7f5cdebb4e45575ec8c2ef7bb994aa534ab1cf5a032336520871c.scope: Deactivated successfully.
Jan 26 17:27:09 compute-0 nova_compute[185389]: 2026-01-26 17:27:09.169 185393 INFO nova.virt.libvirt.driver [-] [instance: e14bdaa0-ac4b-4c4a-8036-640cb431e8b7] Instance destroyed successfully.
Jan 26 17:27:09 compute-0 nova_compute[185389]: 2026-01-26 17:27:09.171 185393 DEBUG nova.objects.instance [None req-889dfd89-22c8-4ce0-ac4e-7c850de90c06 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] Lazy-loading 'resources' on Instance uuid e14bdaa0-ac4b-4c4a-8036-640cb431e8b7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 17:27:09 compute-0 nova_compute[185389]: 2026-01-26 17:27:09.184 185393 DEBUG nova.virt.libvirt.vif [None req-889dfd89-22c8-4ce0-ac4e-7c850de90c06 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T17:25:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-917096142',display_name='tempest-TestServerBasicOps-server-917096142',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-917096142',id=15,image_ref='90acf026-cf3a-409a-999e-35d89bb9a6bf',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLnRUK9zBgAvu3yXJU++dE0NEBGG03A4ixvTYGetSMnKPRq8hbgY/s2fyfA6dqOPRtRchNZwyumgdS7UYTDOwPIkPJ9G6RXts/fzMbRYDHnBP8r6DSqNiTwsyWZ9Gb+oXw==',key_name='tempest-TestServerBasicOps-1906012598',keypairs=<?>,launch_index=0,launched_at=2026-01-26T17:25:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b66458547a0a47a3bec4b3808c40db40',ramdisk_id='',reservation_id='r-3zylsogo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='90acf026-cf3a-409a-999e-35d89bb9a6bf',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerBasicOps-755044077',owner_user_name='tempest-TestServerBasicOps-755044077-project-member',password_0='testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest',password_1='',password_2='',password_3=''},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T17:27:06Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f28bfbc50d234cffbe617e420542c11d',uuid=e14bdaa0-ac4b-4c4a-8036-640cb431e8b7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d92e58d6-ae98-4c68-82c7-6b27e1ed65d9", "address": 
"fa:16:3e:37:f2:15", "network": {"id": "132f3e5b-f2c7-4516-a253-7e99f4460896", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1043248803-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b66458547a0a47a3bec4b3808c40db40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd92e58d6-ae", "ovs_interfaceid": "d92e58d6-ae98-4c68-82c7-6b27e1ed65d9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 17:27:09 compute-0 nova_compute[185389]: 2026-01-26 17:27:09.185 185393 DEBUG nova.network.os_vif_util [None req-889dfd89-22c8-4ce0-ac4e-7c850de90c06 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] Converting VIF {"id": "d92e58d6-ae98-4c68-82c7-6b27e1ed65d9", "address": "fa:16:3e:37:f2:15", "network": {"id": "132f3e5b-f2c7-4516-a253-7e99f4460896", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1043248803-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b66458547a0a47a3bec4b3808c40db40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd92e58d6-ae", "ovs_interfaceid": "d92e58d6-ae98-4c68-82c7-6b27e1ed65d9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 17:27:09 compute-0 nova_compute[185389]: 2026-01-26 17:27:09.186 185393 DEBUG nova.network.os_vif_util [None req-889dfd89-22c8-4ce0-ac4e-7c850de90c06 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:37:f2:15,bridge_name='br-int',has_traffic_filtering=True,id=d92e58d6-ae98-4c68-82c7-6b27e1ed65d9,network=Network(132f3e5b-f2c7-4516-a253-7e99f4460896),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd92e58d6-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 17:27:09 compute-0 nova_compute[185389]: 2026-01-26 17:27:09.186 185393 DEBUG os_vif [None req-889dfd89-22c8-4ce0-ac4e-7c850de90c06 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:37:f2:15,bridge_name='br-int',has_traffic_filtering=True,id=d92e58d6-ae98-4c68-82c7-6b27e1ed65d9,network=Network(132f3e5b-f2c7-4516-a253-7e99f4460896),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd92e58d6-ae') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 17:27:09 compute-0 nova_compute[185389]: 2026-01-26 17:27:09.188 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:27:09 compute-0 nova_compute[185389]: 2026-01-26 17:27:09.189 185393 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd92e58d6-ae, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:27:09 compute-0 nova_compute[185389]: 2026-01-26 17:27:09.191 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:27:09 compute-0 nova_compute[185389]: 2026-01-26 17:27:09.194 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 17:27:09 compute-0 nova_compute[185389]: 2026-01-26 17:27:09.197 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:27:09 compute-0 nova_compute[185389]: 2026-01-26 17:27:09.201 185393 INFO os_vif [None req-889dfd89-22c8-4ce0-ac4e-7c850de90c06 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:37:f2:15,bridge_name='br-int',has_traffic_filtering=True,id=d92e58d6-ae98-4c68-82c7-6b27e1ed65d9,network=Network(132f3e5b-f2c7-4516-a253-7e99f4460896),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd92e58d6-ae')
Jan 26 17:27:09 compute-0 nova_compute[185389]: 2026-01-26 17:27:09.202 185393 INFO nova.virt.libvirt.driver [None req-889dfd89-22c8-4ce0-ac4e-7c850de90c06 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] [instance: e14bdaa0-ac4b-4c4a-8036-640cb431e8b7] Deleting instance files /var/lib/nova/instances/e14bdaa0-ac4b-4c4a-8036-640cb431e8b7_del
Jan 26 17:27:09 compute-0 nova_compute[185389]: 2026-01-26 17:27:09.203 185393 INFO nova.virt.libvirt.driver [None req-889dfd89-22c8-4ce0-ac4e-7c850de90c06 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] [instance: e14bdaa0-ac4b-4c4a-8036-640cb431e8b7] Deletion of /var/lib/nova/instances/e14bdaa0-ac4b-4c4a-8036-640cb431e8b7_del complete
Jan 26 17:27:09 compute-0 podman[259888]: 2026-01-26 17:27:09.225692381 +0000 UTC m=+0.078074491 container remove 7e0b1f1218d7f5cdebb4e45575ec8c2ef7bb994aa534ab1cf5a032336520871c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-132f3e5b-f2c7-4516-a253-7e99f4460896, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 17:27:09 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:27:09.233 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[df77914a-2beb-442c-9ee7-19ea807bf7be]: (4, ('Mon Jan 26 05:27:08 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-132f3e5b-f2c7-4516-a253-7e99f4460896 (7e0b1f1218d7f5cdebb4e45575ec8c2ef7bb994aa534ab1cf5a032336520871c)\n7e0b1f1218d7f5cdebb4e45575ec8c2ef7bb994aa534ab1cf5a032336520871c\nMon Jan 26 05:27:09 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-132f3e5b-f2c7-4516-a253-7e99f4460896 (7e0b1f1218d7f5cdebb4e45575ec8c2ef7bb994aa534ab1cf5a032336520871c)\n7e0b1f1218d7f5cdebb4e45575ec8c2ef7bb994aa534ab1cf5a032336520871c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:27:09 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:27:09.236 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[36e3d919-27de-48d0-a3e9-5cb726447788]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:27:09 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:27:09.238 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap132f3e5b-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:27:09 compute-0 nova_compute[185389]: 2026-01-26 17:27:09.240 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:27:09 compute-0 kernel: tap132f3e5b-f0: left promiscuous mode
Jan 26 17:27:09 compute-0 nova_compute[185389]: 2026-01-26 17:27:09.258 185393 DEBUG nova.compute.manager [req-c1e0a14a-e80f-45a8-8417-6bc1bf7b3d87 req-9a8583d1-0219-424f-89a6-eda87b4ed2c3 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: e14bdaa0-ac4b-4c4a-8036-640cb431e8b7] Received event network-vif-unplugged-d92e58d6-ae98-4c68-82c7-6b27e1ed65d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 17:27:09 compute-0 nova_compute[185389]: 2026-01-26 17:27:09.258 185393 DEBUG oslo_concurrency.lockutils [req-c1e0a14a-e80f-45a8-8417-6bc1bf7b3d87 req-9a8583d1-0219-424f-89a6-eda87b4ed2c3 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquiring lock "e14bdaa0-ac4b-4c4a-8036-640cb431e8b7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:27:09 compute-0 nova_compute[185389]: 2026-01-26 17:27:09.259 185393 DEBUG oslo_concurrency.lockutils [req-c1e0a14a-e80f-45a8-8417-6bc1bf7b3d87 req-9a8583d1-0219-424f-89a6-eda87b4ed2c3 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "e14bdaa0-ac4b-4c4a-8036-640cb431e8b7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:27:09 compute-0 nova_compute[185389]: 2026-01-26 17:27:09.259 185393 DEBUG oslo_concurrency.lockutils [req-c1e0a14a-e80f-45a8-8417-6bc1bf7b3d87 req-9a8583d1-0219-424f-89a6-eda87b4ed2c3 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "e14bdaa0-ac4b-4c4a-8036-640cb431e8b7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:27:09 compute-0 nova_compute[185389]: 2026-01-26 17:27:09.260 185393 DEBUG nova.compute.manager [req-c1e0a14a-e80f-45a8-8417-6bc1bf7b3d87 req-9a8583d1-0219-424f-89a6-eda87b4ed2c3 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: e14bdaa0-ac4b-4c4a-8036-640cb431e8b7] No waiting events found dispatching network-vif-unplugged-d92e58d6-ae98-4c68-82c7-6b27e1ed65d9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 17:27:09 compute-0 nova_compute[185389]: 2026-01-26 17:27:09.260 185393 DEBUG nova.compute.manager [req-c1e0a14a-e80f-45a8-8417-6bc1bf7b3d87 req-9a8583d1-0219-424f-89a6-eda87b4ed2c3 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: e14bdaa0-ac4b-4c4a-8036-640cb431e8b7] Received event network-vif-unplugged-d92e58d6-ae98-4c68-82c7-6b27e1ed65d9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 17:27:09 compute-0 nova_compute[185389]: 2026-01-26 17:27:09.262 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:27:09 compute-0 nova_compute[185389]: 2026-01-26 17:27:09.267 185393 INFO nova.compute.manager [None req-889dfd89-22c8-4ce0-ac4e-7c850de90c06 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] [instance: e14bdaa0-ac4b-4c4a-8036-640cb431e8b7] Took 0.77 seconds to destroy the instance on the hypervisor.
Jan 26 17:27:09 compute-0 nova_compute[185389]: 2026-01-26 17:27:09.268 185393 DEBUG oslo.service.loopingcall [None req-889dfd89-22c8-4ce0-ac4e-7c850de90c06 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 17:27:09 compute-0 nova_compute[185389]: 2026-01-26 17:27:09.268 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:27:09 compute-0 nova_compute[185389]: 2026-01-26 17:27:09.269 185393 DEBUG nova.compute.manager [-] [instance: e14bdaa0-ac4b-4c4a-8036-640cb431e8b7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 17:27:09 compute-0 nova_compute[185389]: 2026-01-26 17:27:09.270 185393 DEBUG nova.network.neutron [-] [instance: e14bdaa0-ac4b-4c4a-8036-640cb431e8b7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 17:27:09 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:27:09.275 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[6f1c94df-ad15-4d44-82c8-8ed8dc559815]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:27:09 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:27:09.291 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[4e8a1ad4-7d46-48ec-a71d-caf10bed3548]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:27:09 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:27:09.294 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[419e7560-30b6-4cc4-bfe0-5bdee274fb86]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:27:09 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:27:09.314 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[2c135894-1afe-4ef3-b403-12df26525cb8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 699308, 'reachable_time': 21890, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 259913, 'error': None, 'target': 'ovnmeta-132f3e5b-f2c7-4516-a253-7e99f4460896', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:27:09 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:27:09.318 107449 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-132f3e5b-f2c7-4516-a253-7e99f4460896 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 17:27:09 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:27:09.319 107449 DEBUG oslo.privsep.daemon [-] privsep: reply[de931a41-90f5-41c7-9fb8-709c28ff4dcc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:27:09 compute-0 systemd[1]: run-netns-ovnmeta\x2d132f3e5b\x2df2c7\x2d4516\x2da253\x2d7e99f4460896.mount: Deactivated successfully.
Jan 26 17:27:10 compute-0 nova_compute[185389]: 2026-01-26 17:27:10.341 185393 DEBUG nova.network.neutron [-] [instance: e14bdaa0-ac4b-4c4a-8036-640cb431e8b7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 17:27:10 compute-0 nova_compute[185389]: 2026-01-26 17:27:10.359 185393 INFO nova.compute.manager [-] [instance: e14bdaa0-ac4b-4c4a-8036-640cb431e8b7] Took 1.09 seconds to deallocate network for instance.
Jan 26 17:27:10 compute-0 nova_compute[185389]: 2026-01-26 17:27:10.421 185393 DEBUG oslo_concurrency.lockutils [None req-889dfd89-22c8-4ce0-ac4e-7c850de90c06 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:27:10 compute-0 nova_compute[185389]: 2026-01-26 17:27:10.422 185393 DEBUG oslo_concurrency.lockutils [None req-889dfd89-22c8-4ce0-ac4e-7c850de90c06 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:27:10 compute-0 nova_compute[185389]: 2026-01-26 17:27:10.531 185393 DEBUG nova.compute.provider_tree [None req-889dfd89-22c8-4ce0-ac4e-7c850de90c06 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:27:10 compute-0 nova_compute[185389]: 2026-01-26 17:27:10.550 185393 DEBUG nova.scheduler.client.report [None req-889dfd89-22c8-4ce0-ac4e-7c850de90c06 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:27:10 compute-0 nova_compute[185389]: 2026-01-26 17:27:10.742 185393 DEBUG oslo_concurrency.lockutils [None req-889dfd89-22c8-4ce0-ac4e-7c850de90c06 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.321s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:27:10 compute-0 nova_compute[185389]: 2026-01-26 17:27:10.823 185393 INFO nova.scheduler.client.report [None req-889dfd89-22c8-4ce0-ac4e-7c850de90c06 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] Deleted allocations for instance e14bdaa0-ac4b-4c4a-8036-640cb431e8b7
Jan 26 17:27:10 compute-0 nova_compute[185389]: 2026-01-26 17:27:10.890 185393 DEBUG oslo_concurrency.lockutils [None req-889dfd89-22c8-4ce0-ac4e-7c850de90c06 f28bfbc50d234cffbe617e420542c11d b66458547a0a47a3bec4b3808c40db40 - - default default] Lock "e14bdaa0-ac4b-4c4a-8036-640cb431e8b7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.403s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:27:11 compute-0 nova_compute[185389]: 2026-01-26 17:27:11.025 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:27:11 compute-0 nova_compute[185389]: 2026-01-26 17:27:11.384 185393 DEBUG nova.compute.manager [req-f9e4078e-831f-4d6b-8b78-c829d81fb3dd req-f18f921c-3169-4ea4-9dba-b8808e6ad254 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: e14bdaa0-ac4b-4c4a-8036-640cb431e8b7] Received event network-vif-plugged-d92e58d6-ae98-4c68-82c7-6b27e1ed65d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 17:27:11 compute-0 nova_compute[185389]: 2026-01-26 17:27:11.384 185393 DEBUG oslo_concurrency.lockutils [req-f9e4078e-831f-4d6b-8b78-c829d81fb3dd req-f18f921c-3169-4ea4-9dba-b8808e6ad254 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquiring lock "e14bdaa0-ac4b-4c4a-8036-640cb431e8b7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:27:11 compute-0 nova_compute[185389]: 2026-01-26 17:27:11.385 185393 DEBUG oslo_concurrency.lockutils [req-f9e4078e-831f-4d6b-8b78-c829d81fb3dd req-f18f921c-3169-4ea4-9dba-b8808e6ad254 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "e14bdaa0-ac4b-4c4a-8036-640cb431e8b7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:27:11 compute-0 nova_compute[185389]: 2026-01-26 17:27:11.385 185393 DEBUG oslo_concurrency.lockutils [req-f9e4078e-831f-4d6b-8b78-c829d81fb3dd req-f18f921c-3169-4ea4-9dba-b8808e6ad254 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "e14bdaa0-ac4b-4c4a-8036-640cb431e8b7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:27:11 compute-0 nova_compute[185389]: 2026-01-26 17:27:11.385 185393 DEBUG nova.compute.manager [req-f9e4078e-831f-4d6b-8b78-c829d81fb3dd req-f18f921c-3169-4ea4-9dba-b8808e6ad254 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: e14bdaa0-ac4b-4c4a-8036-640cb431e8b7] No waiting events found dispatching network-vif-plugged-d92e58d6-ae98-4c68-82c7-6b27e1ed65d9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 17:27:11 compute-0 nova_compute[185389]: 2026-01-26 17:27:11.386 185393 WARNING nova.compute.manager [req-f9e4078e-831f-4d6b-8b78-c829d81fb3dd req-f18f921c-3169-4ea4-9dba-b8808e6ad254 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: e14bdaa0-ac4b-4c4a-8036-640cb431e8b7] Received unexpected event network-vif-plugged-d92e58d6-ae98-4c68-82c7-6b27e1ed65d9 for instance with vm_state deleted and task_state None.
Jan 26 17:27:11 compute-0 nova_compute[185389]: 2026-01-26 17:27:11.386 185393 DEBUG nova.compute.manager [req-f9e4078e-831f-4d6b-8b78-c829d81fb3dd req-f18f921c-3169-4ea4-9dba-b8808e6ad254 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: e14bdaa0-ac4b-4c4a-8036-640cb431e8b7] Received event network-vif-deleted-d92e58d6-ae98-4c68-82c7-6b27e1ed65d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 17:27:14 compute-0 nova_compute[185389]: 2026-01-26 17:27:14.194 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:27:14 compute-0 nova_compute[185389]: 2026-01-26 17:27:14.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:27:15 compute-0 podman[259914]: 2026-01-26 17:27:15.218311075 +0000 UTC m=+0.079874659 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 26 17:27:16 compute-0 nova_compute[185389]: 2026-01-26 17:27:16.028 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:27:17 compute-0 podman[259917]: 2026-01-26 17:27:17.112072949 +0000 UTC m=+1.957341702 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 26 17:27:17 compute-0 podman[259916]: 2026-01-26 17:27:17.137543431 +0000 UTC m=+1.984494179 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20260120, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute)
Jan 26 17:27:17 compute-0 podman[259915]: 2026-01-26 17:27:17.159715123 +0000 UTC m=+2.018879213 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, distribution-scope=public, config_id=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, release=1755695350, vcs-type=git, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that 
uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.openshift.expose-services=, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 26 17:27:17 compute-0 nova_compute[185389]: 2026-01-26 17:27:17.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:27:17 compute-0 nova_compute[185389]: 2026-01-26 17:27:17.719 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 17:27:18 compute-0 podman[259993]: 2026-01-26 17:27:18.247001368 +0000 UTC m=+0.116416941 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Jan 26 17:27:18 compute-0 nova_compute[185389]: 2026-01-26 17:27:18.721 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:27:19 compute-0 nova_compute[185389]: 2026-01-26 17:27:19.199 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:27:20 compute-0 nova_compute[185389]: 2026-01-26 17:27:20.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:27:20 compute-0 nova_compute[185389]: 2026-01-26 17:27:20.720 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 17:27:20 compute-0 nova_compute[185389]: 2026-01-26 17:27:20.769 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 17:27:20 compute-0 nova_compute[185389]: 2026-01-26 17:27:20.770 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:27:21 compute-0 nova_compute[185389]: 2026-01-26 17:27:21.031 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:27:22 compute-0 ovn_controller[97699]: 2026-01-26T17:27:22Z|00177|binding|INFO|Releasing lport 072b84ed-db94-41f8-b8ae-79603b591704 from this chassis (sb_readonly=0)
Jan 26 17:27:22 compute-0 nova_compute[185389]: 2026-01-26 17:27:22.442 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:27:22 compute-0 ovn_controller[97699]: 2026-01-26T17:27:22Z|00178|binding|INFO|Releasing lport 072b84ed-db94-41f8-b8ae-79603b591704 from this chassis (sb_readonly=0)
Jan 26 17:27:22 compute-0 nova_compute[185389]: 2026-01-26 17:27:22.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:27:22 compute-0 nova_compute[185389]: 2026-01-26 17:27:22.740 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:27:23 compute-0 podman[260019]: 2026-01-26 17:27:23.242098783 +0000 UTC m=+0.099341907 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, version=9.4, container_name=kepler, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, maintainer=Red Hat, Inc., managed_by=edpm_ansible, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, release=1214.1726694543, release-0.7.12=)
Jan 26 17:27:23 compute-0 podman[260018]: 2026-01-26 17:27:23.246505343 +0000 UTC m=+0.100212881 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, 
org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 17:27:23 compute-0 podman[260017]: 2026-01-26 17:27:23.29351722 +0000 UTC m=+0.147508596 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, 
org.label-schema.license=GPLv2, config_id=ovn_controller)
Jan 26 17:27:24 compute-0 nova_compute[185389]: 2026-01-26 17:27:24.166 185393 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769448429.1648724, e14bdaa0-ac4b-4c4a-8036-640cb431e8b7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 17:27:24 compute-0 nova_compute[185389]: 2026-01-26 17:27:24.166 185393 INFO nova.compute.manager [-] [instance: e14bdaa0-ac4b-4c4a-8036-640cb431e8b7] VM Stopped (Lifecycle Event)
Jan 26 17:27:24 compute-0 nova_compute[185389]: 2026-01-26 17:27:24.188 185393 DEBUG nova.compute.manager [None req-4240949c-132b-4daa-bef6-35ae85db31dc - - - - - -] [instance: e14bdaa0-ac4b-4c4a-8036-640cb431e8b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 17:27:24 compute-0 nova_compute[185389]: 2026-01-26 17:27:24.202 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:27:24 compute-0 nova_compute[185389]: 2026-01-26 17:27:24.718 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_shelved_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:27:26 compute-0 nova_compute[185389]: 2026-01-26 17:27:26.035 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:27:28 compute-0 nova_compute[185389]: 2026-01-26 17:27:28.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:27:28 compute-0 nova_compute[185389]: 2026-01-26 17:27:28.749 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:27:28 compute-0 nova_compute[185389]: 2026-01-26 17:27:28.750 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:27:28 compute-0 nova_compute[185389]: 2026-01-26 17:27:28.751 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:27:28 compute-0 nova_compute[185389]: 2026-01-26 17:27:28.751 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 17:27:28 compute-0 nova_compute[185389]: 2026-01-26 17:27:28.840 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f9b0315f-2a3c-471e-b629-b19d90a40a97/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:27:28 compute-0 nova_compute[185389]: 2026-01-26 17:27:28.907 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f9b0315f-2a3c-471e-b629-b19d90a40a97/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:27:28 compute-0 nova_compute[185389]: 2026-01-26 17:27:28.908 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f9b0315f-2a3c-471e-b629-b19d90a40a97/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:27:28 compute-0 nova_compute[185389]: 2026-01-26 17:27:28.974 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f9b0315f-2a3c-471e-b629-b19d90a40a97/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:27:29 compute-0 nova_compute[185389]: 2026-01-26 17:27:29.204 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:27:29 compute-0 nova_compute[185389]: 2026-01-26 17:27:29.339 185393 WARNING nova.virt.libvirt.driver [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:27:29 compute-0 nova_compute[185389]: 2026-01-26 17:27:29.340 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5084MB free_disk=72.30755615234375GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 17:27:29 compute-0 nova_compute[185389]: 2026-01-26 17:27:29.341 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:27:29 compute-0 nova_compute[185389]: 2026-01-26 17:27:29.341 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:27:29 compute-0 nova_compute[185389]: 2026-01-26 17:27:29.427 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance f9b0315f-2a3c-471e-b629-b19d90a40a97 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 17:27:29 compute-0 nova_compute[185389]: 2026-01-26 17:27:29.428 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 17:27:29 compute-0 nova_compute[185389]: 2026-01-26 17:27:29.428 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 17:27:29 compute-0 nova_compute[185389]: 2026-01-26 17:27:29.491 185393 DEBUG nova.compute.provider_tree [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:27:29 compute-0 nova_compute[185389]: 2026-01-26 17:27:29.509 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:27:29 compute-0 nova_compute[185389]: 2026-01-26 17:27:29.531 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 17:27:29 compute-0 nova_compute[185389]: 2026-01-26 17:27:29.532 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.191s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:27:29 compute-0 podman[201244]: time="2026-01-26T17:27:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 17:27:29 compute-0 podman[201244]: @ - - [26/Jan/2026:17:27:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 17:27:29 compute-0 podman[201244]: @ - - [26/Jan/2026:17:27:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4387 "" "Go-http-client/1.1"
Jan 26 17:27:31 compute-0 nova_compute[185389]: 2026-01-26 17:27:31.038 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:27:31 compute-0 openstack_network_exporter[204387]: ERROR   17:27:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 17:27:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:27:31 compute-0 openstack_network_exporter[204387]: ERROR   17:27:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 17:27:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:27:34 compute-0 nova_compute[185389]: 2026-01-26 17:27:34.209 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:27:35 compute-0 nova_compute[185389]: 2026-01-26 17:27:35.527 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:27:35 compute-0 nova_compute[185389]: 2026-01-26 17:27:35.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:27:36 compute-0 nova_compute[185389]: 2026-01-26 17:27:36.041 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:27:36 compute-0 nova_compute[185389]: 2026-01-26 17:27:36.929 185393 DEBUG oslo_concurrency.lockutils [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Acquiring lock "e833646f-b29a-4fe4-b786-4ee23c6f8a82" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:27:36 compute-0 nova_compute[185389]: 2026-01-26 17:27:36.929 185393 DEBUG oslo_concurrency.lockutils [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Lock "e833646f-b29a-4fe4-b786-4ee23c6f8a82" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:27:36 compute-0 nova_compute[185389]: 2026-01-26 17:27:36.947 185393 DEBUG nova.compute.manager [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 17:27:37 compute-0 nova_compute[185389]: 2026-01-26 17:27:37.025 185393 DEBUG oslo_concurrency.lockutils [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:27:37 compute-0 nova_compute[185389]: 2026-01-26 17:27:37.026 185393 DEBUG oslo_concurrency.lockutils [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:27:37 compute-0 nova_compute[185389]: 2026-01-26 17:27:37.035 185393 DEBUG nova.virt.hardware [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 17:27:37 compute-0 nova_compute[185389]: 2026-01-26 17:27:37.035 185393 INFO nova.compute.claims [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] Claim successful on node compute-0.ctlplane.example.com
Jan 26 17:27:37 compute-0 nova_compute[185389]: 2026-01-26 17:27:37.212 185393 DEBUG nova.compute.provider_tree [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:27:37 compute-0 nova_compute[185389]: 2026-01-26 17:27:37.238 185393 DEBUG nova.scheduler.client.report [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:27:37 compute-0 nova_compute[185389]: 2026-01-26 17:27:37.268 185393 DEBUG oslo_concurrency.lockutils [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.242s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:27:37 compute-0 nova_compute[185389]: 2026-01-26 17:27:37.269 185393 DEBUG nova.compute.manager [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 17:27:37 compute-0 nova_compute[185389]: 2026-01-26 17:27:37.331 185393 DEBUG nova.compute.manager [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 17:27:37 compute-0 nova_compute[185389]: 2026-01-26 17:27:37.332 185393 DEBUG nova.network.neutron [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 17:27:37 compute-0 nova_compute[185389]: 2026-01-26 17:27:37.350 185393 INFO nova.virt.libvirt.driver [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 17:27:37 compute-0 nova_compute[185389]: 2026-01-26 17:27:37.378 185393 DEBUG nova.compute.manager [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 17:27:37 compute-0 nova_compute[185389]: 2026-01-26 17:27:37.956 185393 DEBUG nova.compute.manager [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 17:27:37 compute-0 nova_compute[185389]: 2026-01-26 17:27:37.958 185393 DEBUG nova.virt.libvirt.driver [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 17:27:37 compute-0 nova_compute[185389]: 2026-01-26 17:27:37.959 185393 INFO nova.virt.libvirt.driver [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] Creating image(s)
Jan 26 17:27:37 compute-0 nova_compute[185389]: 2026-01-26 17:27:37.960 185393 DEBUG oslo_concurrency.lockutils [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Acquiring lock "/var/lib/nova/instances/e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:27:37 compute-0 nova_compute[185389]: 2026-01-26 17:27:37.961 185393 DEBUG oslo_concurrency.lockutils [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Lock "/var/lib/nova/instances/e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:27:37 compute-0 nova_compute[185389]: 2026-01-26 17:27:37.962 185393 DEBUG oslo_concurrency.lockutils [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Lock "/var/lib/nova/instances/e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:27:37 compute-0 nova_compute[185389]: 2026-01-26 17:27:37.982 185393 DEBUG oslo_concurrency.processutils [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ce93f468e93236574b5210325f2425f113a33d3d --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:27:38 compute-0 nova_compute[185389]: 2026-01-26 17:27:38.062 185393 DEBUG oslo_concurrency.processutils [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ce93f468e93236574b5210325f2425f113a33d3d --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:27:38 compute-0 nova_compute[185389]: 2026-01-26 17:27:38.064 185393 DEBUG oslo_concurrency.lockutils [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Acquiring lock "ce93f468e93236574b5210325f2425f113a33d3d" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:27:38 compute-0 nova_compute[185389]: 2026-01-26 17:27:38.065 185393 DEBUG oslo_concurrency.lockutils [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Lock "ce93f468e93236574b5210325f2425f113a33d3d" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:27:38 compute-0 nova_compute[185389]: 2026-01-26 17:27:38.077 185393 DEBUG oslo_concurrency.processutils [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ce93f468e93236574b5210325f2425f113a33d3d --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:27:38 compute-0 nova_compute[185389]: 2026-01-26 17:27:38.098 185393 DEBUG nova.policy [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5ca35c18e54b493f9efdfe2218cce3c7', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '237a863555d84bd386855d9cf781beb4', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 17:27:38 compute-0 nova_compute[185389]: 2026-01-26 17:27:38.152 185393 DEBUG oslo_concurrency.processutils [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ce93f468e93236574b5210325f2425f113a33d3d --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:27:38 compute-0 nova_compute[185389]: 2026-01-26 17:27:38.153 185393 DEBUG oslo_concurrency.processutils [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ce93f468e93236574b5210325f2425f113a33d3d,backing_fmt=raw /var/lib/nova/instances/e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:27:38 compute-0 nova_compute[185389]: 2026-01-26 17:27:38.210 185393 DEBUG oslo_concurrency.processutils [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ce93f468e93236574b5210325f2425f113a33d3d,backing_fmt=raw /var/lib/nova/instances/e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk 1073741824" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:27:38 compute-0 nova_compute[185389]: 2026-01-26 17:27:38.213 185393 DEBUG oslo_concurrency.lockutils [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Lock "ce93f468e93236574b5210325f2425f113a33d3d" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.148s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:27:38 compute-0 nova_compute[185389]: 2026-01-26 17:27:38.215 185393 DEBUG oslo_concurrency.processutils [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ce93f468e93236574b5210325f2425f113a33d3d --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:27:38 compute-0 nova_compute[185389]: 2026-01-26 17:27:38.296 185393 DEBUG oslo_concurrency.processutils [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ce93f468e93236574b5210325f2425f113a33d3d --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:27:38 compute-0 nova_compute[185389]: 2026-01-26 17:27:38.299 185393 DEBUG nova.virt.disk.api [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Checking if we can resize image /var/lib/nova/instances/e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 26 17:27:38 compute-0 nova_compute[185389]: 2026-01-26 17:27:38.300 185393 DEBUG oslo_concurrency.processutils [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:27:38 compute-0 nova_compute[185389]: 2026-01-26 17:27:38.381 185393 DEBUG oslo_concurrency.processutils [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:27:38 compute-0 nova_compute[185389]: 2026-01-26 17:27:38.383 185393 DEBUG nova.virt.disk.api [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Cannot resize image /var/lib/nova/instances/e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 26 17:27:38 compute-0 nova_compute[185389]: 2026-01-26 17:27:38.383 185393 DEBUG nova.objects.instance [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Lazy-loading 'migration_context' on Instance uuid e833646f-b29a-4fe4-b786-4ee23c6f8a82 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 17:27:38 compute-0 nova_compute[185389]: 2026-01-26 17:27:38.399 185393 DEBUG nova.virt.libvirt.driver [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 17:27:38 compute-0 nova_compute[185389]: 2026-01-26 17:27:38.400 185393 DEBUG nova.virt.libvirt.driver [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] Ensure instance console log exists: /var/lib/nova/instances/e833646f-b29a-4fe4-b786-4ee23c6f8a82/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 17:27:38 compute-0 nova_compute[185389]: 2026-01-26 17:27:38.401 185393 DEBUG oslo_concurrency.lockutils [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:27:38 compute-0 nova_compute[185389]: 2026-01-26 17:27:38.402 185393 DEBUG oslo_concurrency.lockutils [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:27:38 compute-0 nova_compute[185389]: 2026-01-26 17:27:38.402 185393 DEBUG oslo_concurrency.lockutils [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:27:39 compute-0 nova_compute[185389]: 2026-01-26 17:27:39.213 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:27:41 compute-0 nova_compute[185389]: 2026-01-26 17:27:41.044 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:27:41 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:27:41.104 106955 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '0e:35:12', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'de:09:3c:73:7d:ab'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 17:27:41 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:27:41.105 106955 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 26 17:27:41 compute-0 nova_compute[185389]: 2026-01-26 17:27:41.108 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:27:41 compute-0 nova_compute[185389]: 2026-01-26 17:27:41.274 185393 DEBUG nova.network.neutron [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] Successfully created port: d4acf2b5-6510-4f4d-b2a0-e986d8b8d2f7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 17:27:42 compute-0 nova_compute[185389]: 2026-01-26 17:27:42.962 185393 DEBUG nova.network.neutron [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] Successfully updated port: d4acf2b5-6510-4f4d-b2a0-e986d8b8d2f7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 17:27:42 compute-0 nova_compute[185389]: 2026-01-26 17:27:42.982 185393 DEBUG oslo_concurrency.lockutils [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Acquiring lock "refresh_cache-e833646f-b29a-4fe4-b786-4ee23c6f8a82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 17:27:42 compute-0 nova_compute[185389]: 2026-01-26 17:27:42.982 185393 DEBUG oslo_concurrency.lockutils [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Acquired lock "refresh_cache-e833646f-b29a-4fe4-b786-4ee23c6f8a82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 17:27:42 compute-0 nova_compute[185389]: 2026-01-26 17:27:42.983 185393 DEBUG nova.network.neutron [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 17:27:43 compute-0 nova_compute[185389]: 2026-01-26 17:27:43.057 185393 DEBUG nova.compute.manager [req-bad2c861-5c15-465c-8cba-9dfd9de08a09 req-7a7273ae-10b8-471e-ba49-c5ed5a509cc4 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] Received event network-changed-d4acf2b5-6510-4f4d-b2a0-e986d8b8d2f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 17:27:43 compute-0 nova_compute[185389]: 2026-01-26 17:27:43.057 185393 DEBUG nova.compute.manager [req-bad2c861-5c15-465c-8cba-9dfd9de08a09 req-7a7273ae-10b8-471e-ba49-c5ed5a509cc4 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] Refreshing instance network info cache due to event network-changed-d4acf2b5-6510-4f4d-b2a0-e986d8b8d2f7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 17:27:43 compute-0 nova_compute[185389]: 2026-01-26 17:27:43.058 185393 DEBUG oslo_concurrency.lockutils [req-bad2c861-5c15-465c-8cba-9dfd9de08a09 req-7a7273ae-10b8-471e-ba49-c5ed5a509cc4 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquiring lock "refresh_cache-e833646f-b29a-4fe4-b786-4ee23c6f8a82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 17:27:43 compute-0 nova_compute[185389]: 2026-01-26 17:27:43.128 185393 DEBUG nova.network.neutron [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 17:27:44 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:27:44.107 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1c72c11d-5050-47c3-89e8-912766588fb3, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:27:44 compute-0 nova_compute[185389]: 2026-01-26 17:27:44.165 185393 DEBUG nova.network.neutron [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] Updating instance_info_cache with network_info: [{"id": "d4acf2b5-6510-4f4d-b2a0-e986d8b8d2f7", "address": "fa:16:3e:80:a8:b1", "network": {"id": "ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.222", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "237a863555d84bd386855d9cf781beb4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4acf2b5-65", "ovs_interfaceid": "d4acf2b5-6510-4f4d-b2a0-e986d8b8d2f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 17:27:44 compute-0 nova_compute[185389]: 2026-01-26 17:27:44.218 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:27:44 compute-0 nova_compute[185389]: 2026-01-26 17:27:44.578 185393 DEBUG oslo_concurrency.lockutils [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Releasing lock "refresh_cache-e833646f-b29a-4fe4-b786-4ee23c6f8a82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 17:27:44 compute-0 nova_compute[185389]: 2026-01-26 17:27:44.580 185393 DEBUG nova.compute.manager [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] Instance network_info: |[{"id": "d4acf2b5-6510-4f4d-b2a0-e986d8b8d2f7", "address": "fa:16:3e:80:a8:b1", "network": {"id": "ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.222", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "237a863555d84bd386855d9cf781beb4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4acf2b5-65", "ovs_interfaceid": "d4acf2b5-6510-4f4d-b2a0-e986d8b8d2f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 17:27:44 compute-0 nova_compute[185389]: 2026-01-26 17:27:44.581 185393 DEBUG oslo_concurrency.lockutils [req-bad2c861-5c15-465c-8cba-9dfd9de08a09 req-7a7273ae-10b8-471e-ba49-c5ed5a509cc4 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquired lock "refresh_cache-e833646f-b29a-4fe4-b786-4ee23c6f8a82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 17:27:44 compute-0 nova_compute[185389]: 2026-01-26 17:27:44.582 185393 DEBUG nova.network.neutron [req-bad2c861-5c15-465c-8cba-9dfd9de08a09 req-7a7273ae-10b8-471e-ba49-c5ed5a509cc4 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] Refreshing network info cache for port d4acf2b5-6510-4f4d-b2a0-e986d8b8d2f7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 17:27:44 compute-0 nova_compute[185389]: 2026-01-26 17:27:44.590 185393 DEBUG nova.virt.libvirt.driver [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] Start _get_guest_xml network_info=[{"id": "d4acf2b5-6510-4f4d-b2a0-e986d8b8d2f7", "address": "fa:16:3e:80:a8:b1", "network": {"id": "ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.222", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "237a863555d84bd386855d9cf781beb4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4acf2b5-65", "ovs_interfaceid": "d4acf2b5-6510-4f4d-b2a0-e986d8b8d2f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T17:23:56Z,direct_url=<?>,disk_format='qcow2',id=a3153c85-d830-4fd6-8cd6-1a69e6723a9e,min_disk=0,min_ram=0,name='tempest-scenario-img--1989180608',owner='237a863555d84bd386855d9cf781beb4',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T17:23:57Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'boot_index': 0, 'encryption_format': None, 'encrypted': False, 'image_id': 'a3153c85-d830-4fd6-8cd6-1a69e6723a9e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 17:27:44 compute-0 nova_compute[185389]: 2026-01-26 17:27:44.599 185393 WARNING nova.virt.libvirt.driver [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:27:44 compute-0 nova_compute[185389]: 2026-01-26 17:27:44.606 185393 DEBUG nova.virt.libvirt.host [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 17:27:44 compute-0 nova_compute[185389]: 2026-01-26 17:27:44.607 185393 DEBUG nova.virt.libvirt.host [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 17:27:44 compute-0 nova_compute[185389]: 2026-01-26 17:27:44.612 185393 DEBUG nova.virt.libvirt.host [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 17:27:44 compute-0 nova_compute[185389]: 2026-01-26 17:27:44.613 185393 DEBUG nova.virt.libvirt.host [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 17:27:44 compute-0 nova_compute[185389]: 2026-01-26 17:27:44.613 185393 DEBUG nova.virt.libvirt.driver [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 17:27:44 compute-0 nova_compute[185389]: 2026-01-26 17:27:44.614 185393 DEBUG nova.virt.hardware [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T17:20:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8d013773-e8ea-4b83-a8e3-f58d9749637f',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T17:23:56Z,direct_url=<?>,disk_format='qcow2',id=a3153c85-d830-4fd6-8cd6-1a69e6723a9e,min_disk=0,min_ram=0,name='tempest-scenario-img--1989180608',owner='237a863555d84bd386855d9cf781beb4',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T17:23:57Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 17:27:44 compute-0 nova_compute[185389]: 2026-01-26 17:27:44.615 185393 DEBUG nova.virt.hardware [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 17:27:44 compute-0 nova_compute[185389]: 2026-01-26 17:27:44.615 185393 DEBUG nova.virt.hardware [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 17:27:44 compute-0 nova_compute[185389]: 2026-01-26 17:27:44.616 185393 DEBUG nova.virt.hardware [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 17:27:44 compute-0 nova_compute[185389]: 2026-01-26 17:27:44.616 185393 DEBUG nova.virt.hardware [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 17:27:44 compute-0 nova_compute[185389]: 2026-01-26 17:27:44.617 185393 DEBUG nova.virt.hardware [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 17:27:44 compute-0 nova_compute[185389]: 2026-01-26 17:27:44.617 185393 DEBUG nova.virt.hardware [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 17:27:44 compute-0 nova_compute[185389]: 2026-01-26 17:27:44.618 185393 DEBUG nova.virt.hardware [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 17:27:44 compute-0 nova_compute[185389]: 2026-01-26 17:27:44.618 185393 DEBUG nova.virt.hardware [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 17:27:44 compute-0 nova_compute[185389]: 2026-01-26 17:27:44.619 185393 DEBUG nova.virt.hardware [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 17:27:44 compute-0 nova_compute[185389]: 2026-01-26 17:27:44.619 185393 DEBUG nova.virt.hardware [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 17:27:44 compute-0 nova_compute[185389]: 2026-01-26 17:27:44.626 185393 DEBUG nova.virt.libvirt.vif [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T17:27:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-9772802-asg-iiflfa6ewgov-5e5s3ztwtke7-uezresy5bjmm',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-9772802-asg-iiflfa6ewgov-5e5s3ztwtke7-uezresy5bjmm',id=16,image_ref='a3153c85-d830-4fd6-8cd6-1a69e6723a9e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='21873820-28a9-4731-9256-efbf2eb46b4d'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='237a863555d84bd386855d9cf781beb4',ramdisk_id='',reservation_id='r-mdspyvmo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a3153c85-d830-4fd6-8cd6-1a69e6723a9e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-2035201521',owner_user_name='tempest-PrometheusGabbiTest-20352
01521-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T17:27:37Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='5ca35c18e54b493f9efdfe2218cce3c7',uuid=e833646f-b29a-4fe4-b786-4ee23c6f8a82,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d4acf2b5-6510-4f4d-b2a0-e986d8b8d2f7", "address": "fa:16:3e:80:a8:b1", "network": {"id": "ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.222", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "237a863555d84bd386855d9cf781beb4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4acf2b5-65", "ovs_interfaceid": "d4acf2b5-6510-4f4d-b2a0-e986d8b8d2f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 17:27:44 compute-0 nova_compute[185389]: 2026-01-26 17:27:44.627 185393 DEBUG nova.network.os_vif_util [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Converting VIF {"id": "d4acf2b5-6510-4f4d-b2a0-e986d8b8d2f7", "address": "fa:16:3e:80:a8:b1", "network": {"id": "ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.222", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "237a863555d84bd386855d9cf781beb4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4acf2b5-65", "ovs_interfaceid": "d4acf2b5-6510-4f4d-b2a0-e986d8b8d2f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 17:27:44 compute-0 nova_compute[185389]: 2026-01-26 17:27:44.628 185393 DEBUG nova.network.os_vif_util [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:80:a8:b1,bridge_name='br-int',has_traffic_filtering=True,id=d4acf2b5-6510-4f4d-b2a0-e986d8b8d2f7,network=Network(ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4acf2b5-65') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 17:27:44 compute-0 nova_compute[185389]: 2026-01-26 17:27:44.629 185393 DEBUG nova.objects.instance [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Lazy-loading 'pci_devices' on Instance uuid e833646f-b29a-4fe4-b786-4ee23c6f8a82 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 17:27:44 compute-0 nova_compute[185389]: 2026-01-26 17:27:44.706 185393 DEBUG nova.virt.libvirt.driver [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] End _get_guest_xml xml=<domain type="kvm">
Jan 26 17:27:44 compute-0 nova_compute[185389]:   <uuid>e833646f-b29a-4fe4-b786-4ee23c6f8a82</uuid>
Jan 26 17:27:44 compute-0 nova_compute[185389]:   <name>instance-00000010</name>
Jan 26 17:27:44 compute-0 nova_compute[185389]:   <memory>131072</memory>
Jan 26 17:27:44 compute-0 nova_compute[185389]:   <vcpu>1</vcpu>
Jan 26 17:27:44 compute-0 nova_compute[185389]:   <metadata>
Jan 26 17:27:44 compute-0 nova_compute[185389]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 17:27:44 compute-0 nova_compute[185389]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 17:27:44 compute-0 nova_compute[185389]:       <nova:name>te-9772802-asg-iiflfa6ewgov-5e5s3ztwtke7-uezresy5bjmm</nova:name>
Jan 26 17:27:44 compute-0 nova_compute[185389]:       <nova:creationTime>2026-01-26 17:27:44</nova:creationTime>
Jan 26 17:27:44 compute-0 nova_compute[185389]:       <nova:flavor name="m1.nano">
Jan 26 17:27:44 compute-0 nova_compute[185389]:         <nova:memory>128</nova:memory>
Jan 26 17:27:44 compute-0 nova_compute[185389]:         <nova:disk>1</nova:disk>
Jan 26 17:27:44 compute-0 nova_compute[185389]:         <nova:swap>0</nova:swap>
Jan 26 17:27:44 compute-0 nova_compute[185389]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 17:27:44 compute-0 nova_compute[185389]:         <nova:vcpus>1</nova:vcpus>
Jan 26 17:27:44 compute-0 nova_compute[185389]:       </nova:flavor>
Jan 26 17:27:44 compute-0 nova_compute[185389]:       <nova:owner>
Jan 26 17:27:44 compute-0 nova_compute[185389]:         <nova:user uuid="5ca35c18e54b493f9efdfe2218cce3c7">tempest-PrometheusGabbiTest-2035201521-project-member</nova:user>
Jan 26 17:27:44 compute-0 nova_compute[185389]:         <nova:project uuid="237a863555d84bd386855d9cf781beb4">tempest-PrometheusGabbiTest-2035201521</nova:project>
Jan 26 17:27:44 compute-0 nova_compute[185389]:       </nova:owner>
Jan 26 17:27:44 compute-0 nova_compute[185389]:       <nova:root type="image" uuid="a3153c85-d830-4fd6-8cd6-1a69e6723a9e"/>
Jan 26 17:27:44 compute-0 nova_compute[185389]:       <nova:ports>
Jan 26 17:27:44 compute-0 nova_compute[185389]:         <nova:port uuid="d4acf2b5-6510-4f4d-b2a0-e986d8b8d2f7">
Jan 26 17:27:44 compute-0 nova_compute[185389]:           <nova:ip type="fixed" address="10.100.0.222" ipVersion="4"/>
Jan 26 17:27:44 compute-0 nova_compute[185389]:         </nova:port>
Jan 26 17:27:44 compute-0 nova_compute[185389]:       </nova:ports>
Jan 26 17:27:44 compute-0 nova_compute[185389]:     </nova:instance>
Jan 26 17:27:44 compute-0 nova_compute[185389]:   </metadata>
Jan 26 17:27:44 compute-0 nova_compute[185389]:   <sysinfo type="smbios">
Jan 26 17:27:44 compute-0 nova_compute[185389]:     <system>
Jan 26 17:27:44 compute-0 nova_compute[185389]:       <entry name="manufacturer">RDO</entry>
Jan 26 17:27:44 compute-0 nova_compute[185389]:       <entry name="product">OpenStack Compute</entry>
Jan 26 17:27:44 compute-0 nova_compute[185389]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 17:27:44 compute-0 nova_compute[185389]:       <entry name="serial">e833646f-b29a-4fe4-b786-4ee23c6f8a82</entry>
Jan 26 17:27:44 compute-0 nova_compute[185389]:       <entry name="uuid">e833646f-b29a-4fe4-b786-4ee23c6f8a82</entry>
Jan 26 17:27:44 compute-0 nova_compute[185389]:       <entry name="family">Virtual Machine</entry>
Jan 26 17:27:44 compute-0 nova_compute[185389]:     </system>
Jan 26 17:27:44 compute-0 nova_compute[185389]:   </sysinfo>
Jan 26 17:27:44 compute-0 nova_compute[185389]:   <os>
Jan 26 17:27:44 compute-0 nova_compute[185389]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 17:27:44 compute-0 nova_compute[185389]:     <boot dev="hd"/>
Jan 26 17:27:44 compute-0 nova_compute[185389]:     <smbios mode="sysinfo"/>
Jan 26 17:27:44 compute-0 nova_compute[185389]:   </os>
Jan 26 17:27:44 compute-0 nova_compute[185389]:   <features>
Jan 26 17:27:44 compute-0 nova_compute[185389]:     <acpi/>
Jan 26 17:27:44 compute-0 nova_compute[185389]:     <apic/>
Jan 26 17:27:44 compute-0 nova_compute[185389]:     <vmcoreinfo/>
Jan 26 17:27:44 compute-0 nova_compute[185389]:   </features>
Jan 26 17:27:44 compute-0 nova_compute[185389]:   <clock offset="utc">
Jan 26 17:27:44 compute-0 nova_compute[185389]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 17:27:44 compute-0 nova_compute[185389]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 17:27:44 compute-0 nova_compute[185389]:     <timer name="hpet" present="no"/>
Jan 26 17:27:44 compute-0 nova_compute[185389]:   </clock>
Jan 26 17:27:44 compute-0 nova_compute[185389]:   <cpu mode="host-model" match="exact">
Jan 26 17:27:44 compute-0 nova_compute[185389]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 17:27:44 compute-0 nova_compute[185389]:   </cpu>
Jan 26 17:27:44 compute-0 nova_compute[185389]:   <devices>
Jan 26 17:27:44 compute-0 nova_compute[185389]:     <disk type="file" device="disk">
Jan 26 17:27:44 compute-0 nova_compute[185389]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 26 17:27:44 compute-0 nova_compute[185389]:       <source file="/var/lib/nova/instances/e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk"/>
Jan 26 17:27:44 compute-0 nova_compute[185389]:       <target dev="vda" bus="virtio"/>
Jan 26 17:27:44 compute-0 nova_compute[185389]:     </disk>
Jan 26 17:27:44 compute-0 nova_compute[185389]:     <disk type="file" device="cdrom">
Jan 26 17:27:44 compute-0 nova_compute[185389]:       <driver name="qemu" type="raw" cache="none"/>
Jan 26 17:27:44 compute-0 nova_compute[185389]:       <source file="/var/lib/nova/instances/e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.config"/>
Jan 26 17:27:44 compute-0 nova_compute[185389]:       <target dev="sda" bus="sata"/>
Jan 26 17:27:44 compute-0 nova_compute[185389]:     </disk>
Jan 26 17:27:44 compute-0 nova_compute[185389]:     <interface type="ethernet">
Jan 26 17:27:44 compute-0 nova_compute[185389]:       <mac address="fa:16:3e:80:a8:b1"/>
Jan 26 17:27:44 compute-0 nova_compute[185389]:       <model type="virtio"/>
Jan 26 17:27:44 compute-0 nova_compute[185389]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 17:27:44 compute-0 nova_compute[185389]:       <mtu size="1442"/>
Jan 26 17:27:44 compute-0 nova_compute[185389]:       <target dev="tapd4acf2b5-65"/>
Jan 26 17:27:44 compute-0 nova_compute[185389]:     </interface>
Jan 26 17:27:44 compute-0 nova_compute[185389]:     <serial type="pty">
Jan 26 17:27:44 compute-0 nova_compute[185389]:       <log file="/var/lib/nova/instances/e833646f-b29a-4fe4-b786-4ee23c6f8a82/console.log" append="off"/>
Jan 26 17:27:44 compute-0 nova_compute[185389]:     </serial>
Jan 26 17:27:44 compute-0 nova_compute[185389]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 17:27:44 compute-0 nova_compute[185389]:     <video>
Jan 26 17:27:44 compute-0 nova_compute[185389]:       <model type="virtio"/>
Jan 26 17:27:44 compute-0 nova_compute[185389]:     </video>
Jan 26 17:27:44 compute-0 nova_compute[185389]:     <input type="tablet" bus="usb"/>
Jan 26 17:27:44 compute-0 nova_compute[185389]:     <rng model="virtio">
Jan 26 17:27:44 compute-0 nova_compute[185389]:       <backend model="random">/dev/urandom</backend>
Jan 26 17:27:44 compute-0 nova_compute[185389]:     </rng>
Jan 26 17:27:44 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root"/>
Jan 26 17:27:44 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:27:44 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:27:44 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:27:44 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:27:44 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:27:44 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:27:44 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:27:44 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:27:44 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:27:44 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:27:44 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:27:44 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:27:44 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:27:44 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:27:44 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:27:44 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:27:44 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:27:44 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:27:44 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:27:44 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:27:44 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:27:44 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:27:44 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:27:44 compute-0 nova_compute[185389]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 17:27:44 compute-0 nova_compute[185389]:     <controller type="usb" index="0"/>
Jan 26 17:27:44 compute-0 nova_compute[185389]:     <memballoon model="virtio">
Jan 26 17:27:44 compute-0 nova_compute[185389]:       <stats period="10"/>
Jan 26 17:27:44 compute-0 nova_compute[185389]:     </memballoon>
Jan 26 17:27:44 compute-0 nova_compute[185389]:   </devices>
Jan 26 17:27:44 compute-0 nova_compute[185389]: </domain>
Jan 26 17:27:44 compute-0 nova_compute[185389]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 17:27:44 compute-0 nova_compute[185389]: 2026-01-26 17:27:44.713 185393 DEBUG nova.compute.manager [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] Preparing to wait for external event network-vif-plugged-d4acf2b5-6510-4f4d-b2a0-e986d8b8d2f7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 17:27:44 compute-0 nova_compute[185389]: 2026-01-26 17:27:44.714 185393 DEBUG oslo_concurrency.lockutils [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Acquiring lock "e833646f-b29a-4fe4-b786-4ee23c6f8a82-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:27:44 compute-0 nova_compute[185389]: 2026-01-26 17:27:44.714 185393 DEBUG oslo_concurrency.lockutils [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Lock "e833646f-b29a-4fe4-b786-4ee23c6f8a82-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:27:44 compute-0 nova_compute[185389]: 2026-01-26 17:27:44.714 185393 DEBUG oslo_concurrency.lockutils [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Lock "e833646f-b29a-4fe4-b786-4ee23c6f8a82-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:27:44 compute-0 nova_compute[185389]: 2026-01-26 17:27:44.715 185393 DEBUG nova.virt.libvirt.vif [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T17:27:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-9772802-asg-iiflfa6ewgov-5e5s3ztwtke7-uezresy5bjmm',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-9772802-asg-iiflfa6ewgov-5e5s3ztwtke7-uezresy5bjmm',id=16,image_ref='a3153c85-d830-4fd6-8cd6-1a69e6723a9e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='21873820-28a9-4731-9256-efbf2eb46b4d'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='237a863555d84bd386855d9cf781beb4',ramdisk_id='',reservation_id='r-mdspyvmo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a3153c85-d830-4fd6-8cd6-1a69e6723a9e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-2035201521',owner_user_name='tempest-PrometheusGabbiTest-2035201521-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T17:27:37Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='5ca35c18e54b493f9efdfe2218cce3c7',uuid=e833646f-b29a-4fe4-b786-4ee23c6f8a82,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d4acf2b5-6510-4f4d-b2a0-e986d8b8d2f7", "address": "fa:16:3e:80:a8:b1", "network": {"id": "ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.222", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "237a863555d84bd386855d9cf781beb4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4acf2b5-65", "ovs_interfaceid": "d4acf2b5-6510-4f4d-b2a0-e986d8b8d2f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 17:27:44 compute-0 nova_compute[185389]: 2026-01-26 17:27:44.715 185393 DEBUG nova.network.os_vif_util [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Converting VIF {"id": "d4acf2b5-6510-4f4d-b2a0-e986d8b8d2f7", "address": "fa:16:3e:80:a8:b1", "network": {"id": "ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.222", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "237a863555d84bd386855d9cf781beb4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4acf2b5-65", "ovs_interfaceid": "d4acf2b5-6510-4f4d-b2a0-e986d8b8d2f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 17:27:44 compute-0 nova_compute[185389]: 2026-01-26 17:27:44.715 185393 DEBUG nova.network.os_vif_util [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:80:a8:b1,bridge_name='br-int',has_traffic_filtering=True,id=d4acf2b5-6510-4f4d-b2a0-e986d8b8d2f7,network=Network(ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4acf2b5-65') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 17:27:44 compute-0 nova_compute[185389]: 2026-01-26 17:27:44.716 185393 DEBUG os_vif [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:a8:b1,bridge_name='br-int',has_traffic_filtering=True,id=d4acf2b5-6510-4f4d-b2a0-e986d8b8d2f7,network=Network(ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4acf2b5-65') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 17:27:44 compute-0 nova_compute[185389]: 2026-01-26 17:27:44.716 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:27:44 compute-0 nova_compute[185389]: 2026-01-26 17:27:44.717 185393 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:27:44 compute-0 nova_compute[185389]: 2026-01-26 17:27:44.717 185393 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 17:27:44 compute-0 nova_compute[185389]: 2026-01-26 17:27:44.721 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:27:44 compute-0 nova_compute[185389]: 2026-01-26 17:27:44.722 185393 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd4acf2b5-65, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:27:44 compute-0 nova_compute[185389]: 2026-01-26 17:27:44.722 185393 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd4acf2b5-65, col_values=(('external_ids', {'iface-id': 'd4acf2b5-6510-4f4d-b2a0-e986d8b8d2f7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:80:a8:b1', 'vm-uuid': 'e833646f-b29a-4fe4-b786-4ee23c6f8a82'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:27:44 compute-0 NetworkManager[56253]: <info>  [1769448464.7254] manager: (tapd4acf2b5-65): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/78)
Jan 26 17:27:44 compute-0 nova_compute[185389]: 2026-01-26 17:27:44.727 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 17:27:44 compute-0 nova_compute[185389]: 2026-01-26 17:27:44.734 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:27:44 compute-0 nova_compute[185389]: 2026-01-26 17:27:44.735 185393 INFO os_vif [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:a8:b1,bridge_name='br-int',has_traffic_filtering=True,id=d4acf2b5-6510-4f4d-b2a0-e986d8b8d2f7,network=Network(ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4acf2b5-65')
Jan 26 17:27:44 compute-0 nova_compute[185389]: 2026-01-26 17:27:44.855 185393 DEBUG nova.virt.libvirt.driver [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 17:27:44 compute-0 nova_compute[185389]: 2026-01-26 17:27:44.855 185393 DEBUG nova.virt.libvirt.driver [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 17:27:44 compute-0 nova_compute[185389]: 2026-01-26 17:27:44.855 185393 DEBUG nova.virt.libvirt.driver [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] No VIF found with MAC fa:16:3e:80:a8:b1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 17:27:44 compute-0 nova_compute[185389]: 2026-01-26 17:27:44.856 185393 INFO nova.virt.libvirt.driver [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] Using config drive
Jan 26 17:27:45 compute-0 nova_compute[185389]: 2026-01-26 17:27:45.232 185393 INFO nova.virt.libvirt.driver [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] Creating config drive at /var/lib/nova/instances/e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.config
Jan 26 17:27:45 compute-0 nova_compute[185389]: 2026-01-26 17:27:45.240 185393 DEBUG oslo_concurrency.processutils [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1t13c90a execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:27:45 compute-0 nova_compute[185389]: 2026-01-26 17:27:45.369 185393 DEBUG oslo_concurrency.processutils [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1t13c90a" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:27:45 compute-0 NetworkManager[56253]: <info>  [1769448465.4608] manager: (tapd4acf2b5-65): new Tun device (/org/freedesktop/NetworkManager/Devices/79)
Jan 26 17:27:45 compute-0 kernel: tapd4acf2b5-65: entered promiscuous mode
Jan 26 17:27:45 compute-0 ovn_controller[97699]: 2026-01-26T17:27:45Z|00179|binding|INFO|Claiming lport d4acf2b5-6510-4f4d-b2a0-e986d8b8d2f7 for this chassis.
Jan 26 17:27:45 compute-0 ovn_controller[97699]: 2026-01-26T17:27:45Z|00180|binding|INFO|d4acf2b5-6510-4f4d-b2a0-e986d8b8d2f7: Claiming fa:16:3e:80:a8:b1 10.100.0.222
Jan 26 17:27:45 compute-0 nova_compute[185389]: 2026-01-26 17:27:45.470 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:27:45 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:27:45.481 106955 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:80:a8:b1 10.100.0.222'], port_security=['fa:16:3e:80:a8:b1 10.100.0.222'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.222/16', 'neutron:device_id': 'e833646f-b29a-4fe4-b786-4ee23c6f8a82', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '237a863555d84bd386855d9cf781beb4', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fc68cb5f-1d27-40d0-8734-5af9ebb54c8e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a60e9a2c-a4db-4b50-8dd7-bdfa9e915edf, chassis=[<ovs.db.idl.Row object at 0x7faee18cfdf0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7faee18cfdf0>], logical_port=d4acf2b5-6510-4f4d-b2a0-e986d8b8d2f7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 17:27:45 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:27:45.482 106955 INFO neutron.agent.ovn.metadata.agent [-] Port d4acf2b5-6510-4f4d-b2a0-e986d8b8d2f7 in datapath ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f bound to our chassis
Jan 26 17:27:45 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:27:45.484 106955 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f
Jan 26 17:27:45 compute-0 ovn_controller[97699]: 2026-01-26T17:27:45Z|00181|binding|INFO|Setting lport d4acf2b5-6510-4f4d-b2a0-e986d8b8d2f7 ovn-installed in OVS
Jan 26 17:27:45 compute-0 ovn_controller[97699]: 2026-01-26T17:27:45Z|00182|binding|INFO|Setting lport d4acf2b5-6510-4f4d-b2a0-e986d8b8d2f7 up in Southbound
Jan 26 17:27:45 compute-0 nova_compute[185389]: 2026-01-26 17:27:45.492 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:27:45 compute-0 nova_compute[185389]: 2026-01-26 17:27:45.499 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:27:45 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:27:45.505 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[e5a21d7d-99c2-4b15-be95-8b800d57f98c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:27:45 compute-0 systemd-udevd[260130]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 17:27:45 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:27:45.542 238787 DEBUG oslo.privsep.daemon [-] privsep: reply[dee42ae7-5b10-4895-badf-335ed7b1219a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:27:45 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:27:45.546 238787 DEBUG oslo.privsep.daemon [-] privsep: reply[9a85471e-e2e6-4297-97ae-e865b3b9bd01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:27:45 compute-0 systemd-machined[156679]: New machine qemu-17-instance-00000010.
Jan 26 17:27:45 compute-0 NetworkManager[56253]: <info>  [1769448465.5518] device (tapd4acf2b5-65): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 17:27:45 compute-0 NetworkManager[56253]: <info>  [1769448465.5567] device (tapd4acf2b5-65): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 17:27:45 compute-0 systemd[1]: Started Virtual Machine qemu-17-instance-00000010.
Jan 26 17:27:45 compute-0 podman[260112]: 2026-01-26 17:27:45.578832412 +0000 UTC m=+0.127742408 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 26 17:27:45 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:27:45.584 238787 DEBUG oslo.privsep.daemon [-] privsep: reply[07dc456a-de28-481a-9f5f-2413c3f2263f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:27:45 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:27:45.609 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[92893138-90bc-4868-a1da-1ed366510b80]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapad47c1ee-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ba:d4:74'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 42], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 691054, 'reachable_time': 16319, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 260150, 'error': None, 'target': 'ovnmeta-ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:27:45 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:27:45.629 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[dc55ae21-670a-47b5-9960-34d4ad08285a]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapad47c1ee-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 691072, 'tstamp': 691072}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 260155, 'error': None, 'target': 'ovnmeta-ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 16, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.255.255'], ['IFA_LABEL', 'tapad47c1ee-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 691075, 'tstamp': 691075}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 260155, 'error': None, 'target': 'ovnmeta-ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:27:45 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:27:45.632 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapad47c1ee-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:27:45 compute-0 nova_compute[185389]: 2026-01-26 17:27:45.634 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:27:45 compute-0 nova_compute[185389]: 2026-01-26 17:27:45.636 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:27:45 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:27:45.637 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapad47c1ee-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:27:45 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:27:45.638 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 17:27:45 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:27:45.638 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapad47c1ee-d0, col_values=(('external_ids', {'iface-id': '072b84ed-db94-41f8-b8ae-79603b591704'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:27:45 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:27:45.639 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 17:27:45 compute-0 nova_compute[185389]: 2026-01-26 17:27:45.812 185393 DEBUG nova.compute.manager [req-97d4320f-966d-4beb-a70f-4e5a21f9ab42 req-faf754ce-2c2d-4dbd-9bb9-4dab2b8cd197 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] Received event network-vif-plugged-d4acf2b5-6510-4f4d-b2a0-e986d8b8d2f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 17:27:45 compute-0 nova_compute[185389]: 2026-01-26 17:27:45.813 185393 DEBUG oslo_concurrency.lockutils [req-97d4320f-966d-4beb-a70f-4e5a21f9ab42 req-faf754ce-2c2d-4dbd-9bb9-4dab2b8cd197 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquiring lock "e833646f-b29a-4fe4-b786-4ee23c6f8a82-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:27:45 compute-0 nova_compute[185389]: 2026-01-26 17:27:45.813 185393 DEBUG oslo_concurrency.lockutils [req-97d4320f-966d-4beb-a70f-4e5a21f9ab42 req-faf754ce-2c2d-4dbd-9bb9-4dab2b8cd197 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "e833646f-b29a-4fe4-b786-4ee23c6f8a82-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:27:45 compute-0 nova_compute[185389]: 2026-01-26 17:27:45.819 185393 DEBUG oslo_concurrency.lockutils [req-97d4320f-966d-4beb-a70f-4e5a21f9ab42 req-faf754ce-2c2d-4dbd-9bb9-4dab2b8cd197 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "e833646f-b29a-4fe4-b786-4ee23c6f8a82-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:27:45 compute-0 nova_compute[185389]: 2026-01-26 17:27:45.820 185393 DEBUG nova.compute.manager [req-97d4320f-966d-4beb-a70f-4e5a21f9ab42 req-faf754ce-2c2d-4dbd-9bb9-4dab2b8cd197 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] Processing event network-vif-plugged-d4acf2b5-6510-4f4d-b2a0-e986d8b8d2f7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 17:27:45 compute-0 nova_compute[185389]: 2026-01-26 17:27:45.991 185393 DEBUG nova.virt.driver [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Emitting event <LifecycleEvent: 1769448465.991337, e833646f-b29a-4fe4-b786-4ee23c6f8a82 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 17:27:45 compute-0 nova_compute[185389]: 2026-01-26 17:27:45.992 185393 INFO nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] VM Started (Lifecycle Event)
Jan 26 17:27:45 compute-0 nova_compute[185389]: 2026-01-26 17:27:45.994 185393 DEBUG nova.compute.manager [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 17:27:45 compute-0 nova_compute[185389]: 2026-01-26 17:27:45.998 185393 DEBUG nova.virt.libvirt.driver [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 17:27:46 compute-0 nova_compute[185389]: 2026-01-26 17:27:46.003 185393 INFO nova.virt.libvirt.driver [-] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] Instance spawned successfully.
Jan 26 17:27:46 compute-0 nova_compute[185389]: 2026-01-26 17:27:46.003 185393 DEBUG nova.virt.libvirt.driver [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 17:27:46 compute-0 nova_compute[185389]: 2026-01-26 17:27:46.015 185393 DEBUG nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 17:27:46 compute-0 nova_compute[185389]: 2026-01-26 17:27:46.023 185393 DEBUG nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 17:27:46 compute-0 nova_compute[185389]: 2026-01-26 17:27:46.029 185393 DEBUG nova.virt.libvirt.driver [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 17:27:46 compute-0 nova_compute[185389]: 2026-01-26 17:27:46.030 185393 DEBUG nova.virt.libvirt.driver [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 17:27:46 compute-0 nova_compute[185389]: 2026-01-26 17:27:46.030 185393 DEBUG nova.virt.libvirt.driver [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 17:27:46 compute-0 nova_compute[185389]: 2026-01-26 17:27:46.030 185393 DEBUG nova.virt.libvirt.driver [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 17:27:46 compute-0 nova_compute[185389]: 2026-01-26 17:27:46.031 185393 DEBUG nova.virt.libvirt.driver [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 17:27:46 compute-0 nova_compute[185389]: 2026-01-26 17:27:46.031 185393 DEBUG nova.virt.libvirt.driver [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 17:27:46 compute-0 nova_compute[185389]: 2026-01-26 17:27:46.047 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:27:46 compute-0 nova_compute[185389]: 2026-01-26 17:27:46.060 185393 INFO nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 17:27:46 compute-0 nova_compute[185389]: 2026-01-26 17:27:46.060 185393 DEBUG nova.virt.driver [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Emitting event <LifecycleEvent: 1769448465.9914804, e833646f-b29a-4fe4-b786-4ee23c6f8a82 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 17:27:46 compute-0 nova_compute[185389]: 2026-01-26 17:27:46.060 185393 INFO nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] VM Paused (Lifecycle Event)
Jan 26 17:27:46 compute-0 nova_compute[185389]: 2026-01-26 17:27:46.123 185393 DEBUG nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 17:27:46 compute-0 nova_compute[185389]: 2026-01-26 17:27:46.128 185393 DEBUG nova.virt.driver [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] Emitting event <LifecycleEvent: 1769448465.998114, e833646f-b29a-4fe4-b786-4ee23c6f8a82 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 17:27:46 compute-0 nova_compute[185389]: 2026-01-26 17:27:46.128 185393 INFO nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] VM Resumed (Lifecycle Event)
Jan 26 17:27:46 compute-0 nova_compute[185389]: 2026-01-26 17:27:46.165 185393 DEBUG nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 17:27:46 compute-0 nova_compute[185389]: 2026-01-26 17:27:46.166 185393 INFO nova.compute.manager [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] Took 8.21 seconds to spawn the instance on the hypervisor.
Jan 26 17:27:46 compute-0 nova_compute[185389]: 2026-01-26 17:27:46.166 185393 DEBUG nova.compute.manager [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 17:27:46 compute-0 nova_compute[185389]: 2026-01-26 17:27:46.172 185393 DEBUG nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 17:27:46 compute-0 nova_compute[185389]: 2026-01-26 17:27:46.226 185393 INFO nova.compute.manager [None req-e50f0e98-eebf-4af3-82c4-2db56edd41fb - - - - - -] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 17:27:46 compute-0 nova_compute[185389]: 2026-01-26 17:27:46.251 185393 INFO nova.compute.manager [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] Took 9.25 seconds to build instance.
Jan 26 17:27:46 compute-0 nova_compute[185389]: 2026-01-26 17:27:46.269 185393 DEBUG oslo_concurrency.lockutils [None req-104619af-0e79-4e84-8278-fc17af04be7e 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Lock "e833646f-b29a-4fe4-b786-4ee23c6f8a82" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.340s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:27:46 compute-0 nova_compute[185389]: 2026-01-26 17:27:46.488 185393 DEBUG nova.network.neutron [req-bad2c861-5c15-465c-8cba-9dfd9de08a09 req-7a7273ae-10b8-471e-ba49-c5ed5a509cc4 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] Updated VIF entry in instance network info cache for port d4acf2b5-6510-4f4d-b2a0-e986d8b8d2f7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 17:27:46 compute-0 nova_compute[185389]: 2026-01-26 17:27:46.489 185393 DEBUG nova.network.neutron [req-bad2c861-5c15-465c-8cba-9dfd9de08a09 req-7a7273ae-10b8-471e-ba49-c5ed5a509cc4 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] Updating instance_info_cache with network_info: [{"id": "d4acf2b5-6510-4f4d-b2a0-e986d8b8d2f7", "address": "fa:16:3e:80:a8:b1", "network": {"id": "ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.222", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "237a863555d84bd386855d9cf781beb4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4acf2b5-65", "ovs_interfaceid": "d4acf2b5-6510-4f4d-b2a0-e986d8b8d2f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 17:27:46 compute-0 nova_compute[185389]: 2026-01-26 17:27:46.506 185393 DEBUG oslo_concurrency.lockutils [req-bad2c861-5c15-465c-8cba-9dfd9de08a09 req-7a7273ae-10b8-471e-ba49-c5ed5a509cc4 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Releasing lock "refresh_cache-e833646f-b29a-4fe4-b786-4ee23c6f8a82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 17:27:48 compute-0 podman[260167]: 2026-01-26 17:27:48.204022998 +0000 UTC m=+0.089668152 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, org.label-schema.build-date=20260120, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Jan 26 17:27:48 compute-0 podman[260168]: 2026-01-26 17:27:48.21626291 +0000 UTC m=+0.101065381 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 26 17:27:48 compute-0 podman[260166]: 2026-01-26 17:27:48.22542164 +0000 UTC m=+0.112202865 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, 
io.openshift.expose-services=, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, vendor=Red Hat, Inc., config_id=openstack_network_exporter, vcs-type=git, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, version=9.6, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., managed_by=edpm_ansible)
Jan 26 17:27:48 compute-0 nova_compute[185389]: 2026-01-26 17:27:48.240 185393 DEBUG nova.compute.manager [req-bc8adeda-c6a8-4347-bef9-729f23e60c9e req-517a62d0-5b94-45e9-93d6-632db400b051 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] Received event network-vif-plugged-d4acf2b5-6510-4f4d-b2a0-e986d8b8d2f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 17:27:48 compute-0 nova_compute[185389]: 2026-01-26 17:27:48.240 185393 DEBUG oslo_concurrency.lockutils [req-bc8adeda-c6a8-4347-bef9-729f23e60c9e req-517a62d0-5b94-45e9-93d6-632db400b051 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquiring lock "e833646f-b29a-4fe4-b786-4ee23c6f8a82-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:27:48 compute-0 nova_compute[185389]: 2026-01-26 17:27:48.241 185393 DEBUG oslo_concurrency.lockutils [req-bc8adeda-c6a8-4347-bef9-729f23e60c9e req-517a62d0-5b94-45e9-93d6-632db400b051 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "e833646f-b29a-4fe4-b786-4ee23c6f8a82-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:27:48 compute-0 nova_compute[185389]: 2026-01-26 17:27:48.241 185393 DEBUG oslo_concurrency.lockutils [req-bc8adeda-c6a8-4347-bef9-729f23e60c9e req-517a62d0-5b94-45e9-93d6-632db400b051 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "e833646f-b29a-4fe4-b786-4ee23c6f8a82-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:27:48 compute-0 nova_compute[185389]: 2026-01-26 17:27:48.241 185393 DEBUG nova.compute.manager [req-bc8adeda-c6a8-4347-bef9-729f23e60c9e req-517a62d0-5b94-45e9-93d6-632db400b051 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] No waiting events found dispatching network-vif-plugged-d4acf2b5-6510-4f4d-b2a0-e986d8b8d2f7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 17:27:48 compute-0 nova_compute[185389]: 2026-01-26 17:27:48.241 185393 WARNING nova.compute.manager [req-bc8adeda-c6a8-4347-bef9-729f23e60c9e req-517a62d0-5b94-45e9-93d6-632db400b051 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] Received unexpected event network-vif-plugged-d4acf2b5-6510-4f4d-b2a0-e986d8b8d2f7 for instance with vm_state active and task_state None.
Jan 26 17:27:49 compute-0 podman[260227]: 2026-01-26 17:27:49.192121596 +0000 UTC m=+0.068907837 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 26 17:27:49 compute-0 nova_compute[185389]: 2026-01-26 17:27:49.726 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:27:51 compute-0 nova_compute[185389]: 2026-01-26 17:27:51.050 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:27:54 compute-0 podman[260247]: 2026-01-26 17:27:54.220414414 +0000 UTC m=+0.101360450 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Jan 26 17:27:54 compute-0 podman[260251]: 2026-01-26 17:27:54.256166516 +0000 UTC m=+0.128627402 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, release-0.7.12=, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, config_id=kepler, io.openshift.tags=base rhel9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, distribution-scope=public, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543)
Jan 26 17:27:54 compute-0 podman[260246]: 2026-01-26 17:27:54.275513982 +0000 UTC m=+0.164249620 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 17:27:54 compute-0 nova_compute[185389]: 2026-01-26 17:27:54.728 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:27:56 compute-0 nova_compute[185389]: 2026-01-26 17:27:56.052 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:27:59 compute-0 nova_compute[185389]: 2026-01-26 17:27:59.733 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:27:59 compute-0 podman[201244]: time="2026-01-26T17:27:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 17:27:59 compute-0 podman[201244]: @ - - [26/Jan/2026:17:27:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 17:27:59 compute-0 podman[201244]: @ - - [26/Jan/2026:17:27:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4382 "" "Go-http-client/1.1"
Jan 26 17:28:01 compute-0 nova_compute[185389]: 2026-01-26 17:28:01.055 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:28:01 compute-0 openstack_network_exporter[204387]: ERROR   17:28:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 17:28:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:28:01 compute-0 openstack_network_exporter[204387]: ERROR   17:28:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 17:28:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:28:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:28:01.785 106955 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:28:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:28:01.786 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:28:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:28:01.787 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:28:04 compute-0 nova_compute[185389]: 2026-01-26 17:28:04.738 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:28:06 compute-0 nova_compute[185389]: 2026-01-26 17:28:06.058 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:28:09 compute-0 nova_compute[185389]: 2026-01-26 17:28:09.744 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:28:11 compute-0 nova_compute[185389]: 2026-01-26 17:28:11.061 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:28:14 compute-0 nova_compute[185389]: 2026-01-26 17:28:14.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:28:14 compute-0 nova_compute[185389]: 2026-01-26 17:28:14.748 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:28:15 compute-0 ovn_controller[97699]: 2026-01-26T17:28:15Z|00183|memory_trim|INFO|Detected inactivity (last active 30006 ms ago): trimming memory
Jan 26 17:28:16 compute-0 nova_compute[185389]: 2026-01-26 17:28:16.064 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:28:16 compute-0 podman[260305]: 2026-01-26 17:28:16.221989002 +0000 UTC m=+0.101564144 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 26 17:28:18 compute-0 nova_compute[185389]: 2026-01-26 17:28:18.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:28:18 compute-0 nova_compute[185389]: 2026-01-26 17:28:18.719 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 17:28:19 compute-0 podman[260332]: 2026-01-26 17:28:19.203591957 +0000 UTC m=+0.073950344 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 26 17:28:19 compute-0 podman[260330]: 2026-01-26 17:28:19.223246712 +0000 UTC m=+0.106370886 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, config_id=openstack_network_exporter, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, architecture=x86_64, container_name=openstack_network_exporter, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container)
Jan 26 17:28:19 compute-0 podman[260331]: 2026-01-26 17:28:19.229129092 +0000 UTC m=+0.103506958 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260120, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Jan 26 17:28:19 compute-0 podman[260393]: 2026-01-26 17:28:19.324156798 +0000 UTC m=+0.092307543 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 26 17:28:19 compute-0 nova_compute[185389]: 2026-01-26 17:28:19.753 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:28:20 compute-0 nova_compute[185389]: 2026-01-26 17:28:20.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:28:20 compute-0 nova_compute[185389]: 2026-01-26 17:28:20.721 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 17:28:20 compute-0 nova_compute[185389]: 2026-01-26 17:28:20.722 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 17:28:21 compute-0 nova_compute[185389]: 2026-01-26 17:28:21.024 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "refresh_cache-f9b0315f-2a3c-471e-b629-b19d90a40a97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 17:28:21 compute-0 nova_compute[185389]: 2026-01-26 17:28:21.026 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquired lock "refresh_cache-f9b0315f-2a3c-471e-b629-b19d90a40a97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 17:28:21 compute-0 nova_compute[185389]: 2026-01-26 17:28:21.027 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 26 17:28:21 compute-0 nova_compute[185389]: 2026-01-26 17:28:21.027 185393 DEBUG nova.objects.instance [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lazy-loading 'info_cache' on Instance uuid f9b0315f-2a3c-471e-b629-b19d90a40a97 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 17:28:21 compute-0 nova_compute[185389]: 2026-01-26 17:28:21.068 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:28:21 compute-0 ovn_controller[97699]: 2026-01-26T17:28:21Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:80:a8:b1 10.100.0.222
Jan 26 17:28:21 compute-0 ovn_controller[97699]: 2026-01-26T17:28:21Z|00024|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:80:a8:b1 10.100.0.222
Jan 26 17:28:23 compute-0 nova_compute[185389]: 2026-01-26 17:28:23.239 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] Updating instance_info_cache with network_info: [{"id": "4ea974be-d995-4c0f-bbcd-7a1410b167d8", "address": "fa:16:3e:ea:9e:d9", "network": {"id": "ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.123", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "237a863555d84bd386855d9cf781beb4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ea974be-d9", "ovs_interfaceid": "4ea974be-d995-4c0f-bbcd-7a1410b167d8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 17:28:23 compute-0 nova_compute[185389]: 2026-01-26 17:28:23.258 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Releasing lock "refresh_cache-f9b0315f-2a3c-471e-b629-b19d90a40a97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 17:28:23 compute-0 nova_compute[185389]: 2026-01-26 17:28:23.260 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 26 17:28:23 compute-0 nova_compute[185389]: 2026-01-26 17:28:23.262 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:28:23 compute-0 nova_compute[185389]: 2026-01-26 17:28:23.263 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:28:23 compute-0 nova_compute[185389]: 2026-01-26 17:28:23.265 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:28:24 compute-0 nova_compute[185389]: 2026-01-26 17:28:24.756 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:28:25 compute-0 podman[260424]: 2026-01-26 17:28:25.224757872 +0000 UTC m=+0.103369463 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, release-0.7.12=, container_name=kepler, managed_by=edpm_ansible, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, maintainer=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-container, config_id=kepler)
Jan 26 17:28:25 compute-0 podman[260423]: 2026-01-26 17:28:25.225977145 +0000 UTC m=+0.109435869 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS)
Jan 26 17:28:25 compute-0 podman[260422]: 2026-01-26 17:28:25.229506452 +0000 UTC m=+0.113442259 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 26 17:28:26 compute-0 nova_compute[185389]: 2026-01-26 17:28:26.068 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:28:26 compute-0 nova_compute[185389]: 2026-01-26 17:28:26.609 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:28:26 compute-0 nova_compute[185389]: 2026-01-26 17:28:26.782 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Triggering sync for uuid f9b0315f-2a3c-471e-b629-b19d90a40a97 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 26 17:28:26 compute-0 nova_compute[185389]: 2026-01-26 17:28:26.783 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Triggering sync for uuid e833646f-b29a-4fe4-b786-4ee23c6f8a82 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 26 17:28:26 compute-0 nova_compute[185389]: 2026-01-26 17:28:26.784 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "f9b0315f-2a3c-471e-b629-b19d90a40a97" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:28:26 compute-0 nova_compute[185389]: 2026-01-26 17:28:26.785 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "f9b0315f-2a3c-471e-b629-b19d90a40a97" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:28:26 compute-0 nova_compute[185389]: 2026-01-26 17:28:26.785 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "e833646f-b29a-4fe4-b786-4ee23c6f8a82" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:28:26 compute-0 nova_compute[185389]: 2026-01-26 17:28:26.786 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "e833646f-b29a-4fe4-b786-4ee23c6f8a82" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:28:26 compute-0 nova_compute[185389]: 2026-01-26 17:28:26.848 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "f9b0315f-2a3c-471e-b629-b19d90a40a97" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.063s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:28:26 compute-0 nova_compute[185389]: 2026-01-26 17:28:26.850 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "e833646f-b29a-4fe4-b786-4ee23c6f8a82" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.064s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:28:29 compute-0 nova_compute[185389]: 2026-01-26 17:28:29.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:28:29 compute-0 podman[201244]: time="2026-01-26T17:28:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 17:28:29 compute-0 podman[201244]: @ - - [26/Jan/2026:17:28:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 17:28:29 compute-0 nova_compute[185389]: 2026-01-26 17:28:29.765 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:28:29 compute-0 podman[201244]: @ - - [26/Jan/2026:17:28:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4381 "" "Go-http-client/1.1"
Jan 26 17:28:29 compute-0 nova_compute[185389]: 2026-01-26 17:28:29.775 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:28:29 compute-0 nova_compute[185389]: 2026-01-26 17:28:29.776 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:28:29 compute-0 nova_compute[185389]: 2026-01-26 17:28:29.776 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:28:29 compute-0 nova_compute[185389]: 2026-01-26 17:28:29.777 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 17:28:29 compute-0 nova_compute[185389]: 2026-01-26 17:28:29.921 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:28:30 compute-0 nova_compute[185389]: 2026-01-26 17:28:30.014 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:28:30 compute-0 nova_compute[185389]: 2026-01-26 17:28:30.024 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:28:30 compute-0 nova_compute[185389]: 2026-01-26 17:28:30.096 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:28:30 compute-0 nova_compute[185389]: 2026-01-26 17:28:30.105 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f9b0315f-2a3c-471e-b629-b19d90a40a97/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:28:30 compute-0 nova_compute[185389]: 2026-01-26 17:28:30.174 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f9b0315f-2a3c-471e-b629-b19d90a40a97/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:28:30 compute-0 nova_compute[185389]: 2026-01-26 17:28:30.175 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f9b0315f-2a3c-471e-b629-b19d90a40a97/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:28:30 compute-0 nova_compute[185389]: 2026-01-26 17:28:30.239 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f9b0315f-2a3c-471e-b629-b19d90a40a97/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:28:30 compute-0 nova_compute[185389]: 2026-01-26 17:28:30.620 185393 WARNING nova.virt.libvirt.driver [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:28:30 compute-0 nova_compute[185389]: 2026-01-26 17:28:30.622 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4914MB free_disk=72.27883911132812GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 17:28:30 compute-0 nova_compute[185389]: 2026-01-26 17:28:30.622 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:28:30 compute-0 nova_compute[185389]: 2026-01-26 17:28:30.623 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:28:30 compute-0 nova_compute[185389]: 2026-01-26 17:28:30.705 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance f9b0315f-2a3c-471e-b629-b19d90a40a97 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 17:28:30 compute-0 nova_compute[185389]: 2026-01-26 17:28:30.705 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance e833646f-b29a-4fe4-b786-4ee23c6f8a82 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 17:28:30 compute-0 nova_compute[185389]: 2026-01-26 17:28:30.706 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 17:28:30 compute-0 nova_compute[185389]: 2026-01-26 17:28:30.706 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 17:28:30 compute-0 nova_compute[185389]: 2026-01-26 17:28:30.723 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Refreshing inventories for resource provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 26 17:28:30 compute-0 nova_compute[185389]: 2026-01-26 17:28:30.745 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Updating ProviderTree inventory for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 26 17:28:30 compute-0 nova_compute[185389]: 2026-01-26 17:28:30.746 185393 DEBUG nova.compute.provider_tree [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Updating inventory in ProviderTree for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 26 17:28:30 compute-0 nova_compute[185389]: 2026-01-26 17:28:30.759 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Refreshing aggregate associations for resource provider b0bb5d31-f35b-4a04-b67d-66acc24fb822, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 26 17:28:30 compute-0 nova_compute[185389]: 2026-01-26 17:28:30.794 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Refreshing trait associations for resource provider b0bb5d31-f35b-4a04-b67d-66acc24fb822, traits: COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SHA,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_ABM,HW_CPU_X86_AMD_SVM,HW_CPU_X86_F16C,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE42,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_BMI,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_MMX,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 26 17:28:30 compute-0 nova_compute[185389]: 2026-01-26 17:28:30.867 185393 DEBUG nova.compute.provider_tree [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:28:30 compute-0 nova_compute[185389]: 2026-01-26 17:28:30.910 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:28:30 compute-0 nova_compute[185389]: 2026-01-26 17:28:30.940 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 17:28:30 compute-0 nova_compute[185389]: 2026-01-26 17:28:30.941 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.318s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:28:30 compute-0 nova_compute[185389]: 2026-01-26 17:28:30.942 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:28:30 compute-0 nova_compute[185389]: 2026-01-26 17:28:30.942 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 26 17:28:31 compute-0 nova_compute[185389]: 2026-01-26 17:28:31.071 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:31.359 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 26 17:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:31.359 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 26 17:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:31.359 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:31.360 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f04ce8af020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:31.361 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:31.361 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:31.361 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:31.361 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:31.361 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:31.361 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:31.362 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:31.362 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:31.362 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:31.362 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:31.362 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:31.362 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:31.362 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8adb20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:31.362 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:31.363 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:31.363 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:31.363 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:31.363 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:31.363 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:31.363 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:31.364 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:31.364 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:31.364 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:31.364 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:31.364 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:31.365 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance e833646f-b29a-4fe4-b786-4ee23c6f8a82 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 26 17:28:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:31.367 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/e833646f-b29a-4fe4-b786-4ee23c6f8a82 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}f609241ecdf9402bd0546eda97196742cf90b225f1ce4eb867c55aad4d129116" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 26 17:28:31 compute-0 openstack_network_exporter[204387]: ERROR   17:28:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 17:28:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:28:31 compute-0 openstack_network_exporter[204387]: ERROR   17:28:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 17:28:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:28:31 compute-0 nova_compute[185389]: 2026-01-26 17:28:31.736 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:28:31 compute-0 nova_compute[185389]: 2026-01-26 17:28:31.750 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 26 17:28:31 compute-0 nova_compute[185389]: 2026-01-26 17:28:31.783 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.344 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1832 Content-Type: application/json Date: Mon, 26 Jan 2026 17:28:31 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-ac7d632d-4733-4237-9dd6-4a524a80984c x-openstack-request-id: req-ac7d632d-4733-4237-9dd6-4a524a80984c _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.344 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "e833646f-b29a-4fe4-b786-4ee23c6f8a82", "name": "te-9772802-asg-iiflfa6ewgov-5e5s3ztwtke7-uezresy5bjmm", "status": "ACTIVE", "tenant_id": "237a863555d84bd386855d9cf781beb4", "user_id": "5ca35c18e54b493f9efdfe2218cce3c7", "metadata": {"metering.server_group": "21873820-28a9-4731-9256-efbf2eb46b4d"}, "hostId": "d53ff20533f73aa1094f7d1b315e252b91e3e85487374d883e31cb42", "image": {"id": "a3153c85-d830-4fd6-8cd6-1a69e6723a9e", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/a3153c85-d830-4fd6-8cd6-1a69e6723a9e"}]}, "flavor": {"id": "8d013773-e8ea-4b83-a8e3-f58d9749637f", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/8d013773-e8ea-4b83-a8e3-f58d9749637f"}]}, "created": "2026-01-26T17:27:35Z", "updated": "2026-01-26T17:27:46Z", "addresses": {"": [{"version": 4, "addr": "10.100.0.222", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:80:a8:b1"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/e833646f-b29a-4fe4-b786-4ee23c6f8a82"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/e833646f-b29a-4fe4-b786-4ee23c6f8a82"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2026-01-26T17:27:46.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "default"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000010", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.344 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/e833646f-b29a-4fe4-b786-4ee23c6f8a82 used request id req-ac7d632d-4733-4237-9dd6-4a524a80984c request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.346 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'e833646f-b29a-4fe4-b786-4ee23c6f8a82', 'name': 'te-9772802-asg-iiflfa6ewgov-5e5s3ztwtke7-uezresy5bjmm', 'flavor': {'id': '8d013773-e8ea-4b83-a8e3-f58d9749637f', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'a3153c85-d830-4fd6-8cd6-1a69e6723a9e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000010', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '237a863555d84bd386855d9cf781beb4', 'user_id': '5ca35c18e54b493f9efdfe2218cce3c7', 'hostId': 'd53ff20533f73aa1094f7d1b315e252b91e3e85487374d883e31cb42', 'status': 'active', 'metadata': {'metering.server_group': '21873820-28a9-4731-9256-efbf2eb46b4d'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.349 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'f9b0315f-2a3c-471e-b629-b19d90a40a97', 'name': 'te-9772802-asg-iiflfa6ewgov-qnk6sbr7rvkm-ziuhg6hort2c', 'flavor': {'id': '8d013773-e8ea-4b83-a8e3-f58d9749637f', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'a3153c85-d830-4fd6-8cd6-1a69e6723a9e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000d', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '237a863555d84bd386855d9cf781beb4', 'user_id': '5ca35c18e54b493f9efdfe2218cce3c7', 'hostId': 'd53ff20533f73aa1094f7d1b315e252b91e3e85487374d883e31cb42', 'status': 'active', 'metadata': {'metering.server_group': '21873820-28a9-4731-9256-efbf2eb46b4d'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.349 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.349 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.350 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.350 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.351 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-26T17:28:32.350229) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.398 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.write.bytes volume: 72777728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.399 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.437 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.write.bytes volume: 72855552 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.437 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.438 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.438 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f04ce8af080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.439 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.439 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.439 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.439 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.440 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.write.latency volume: 6580893830 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.440 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.440 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.write.latency volume: 16986439843 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.440 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-26T17:28:32.439731) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.441 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.441 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.441 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f04ce8af0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.442 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.442 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.442 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.442 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.442 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.write.requests volume: 296 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.442 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.443 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.write.requests volume: 317 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.443 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.443 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.444 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f04ce8af470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.444 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.444 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.444 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.444 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.445 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-26T17:28:32.442361) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.445 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-26T17:28:32.444494) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.447 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for e833646f-b29a-4fe4-b786-4ee23c6f8a82 / tapd4acf2b5-65 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.448 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.451 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/network.incoming.bytes.delta volume: 168 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.451 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.451 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f04ce8ac260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.452 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.452 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.452 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.452 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.452 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.452 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: te-9772802-asg-iiflfa6ewgov-5e5s3ztwtke7-uezresy5bjmm>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: te-9772802-asg-iiflfa6ewgov-5e5s3ztwtke7-uezresy5bjmm>]
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.453 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f04cfa81c70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.453 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.453 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.453 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.453 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.453 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2026-01-26T17:28:32.452428) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.454 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-26T17:28:32.453493) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.475 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/cpu volume: 44360000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.496 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/cpu volume: 234430000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.497 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.498 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f04ce8ac170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.499 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.499 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.499 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.499 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.500 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/network.incoming.packets volume: 10 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.500 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-26T17:28:32.499520) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.500 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/network.incoming.packets volume: 12 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.501 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.501 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f04ce8af140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.501 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.501 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.501 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.501 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.502 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.502 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f04ce8adb50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.502 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.502 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.502 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.502 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.502 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.503 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:28:32 compute-0 rsyslogd[235842]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.503 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.503 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f04ce8ad9a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.503 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.504 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.504 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.504 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.504 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/network.outgoing.bytes volume: 1550 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.504 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.504 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.505 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f04ce8af1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.505 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.505 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.505 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.505 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.505 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-26T17:28:32.501605) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.506 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.506 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-26T17:28:32.502722) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.506 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f04ce8ad8e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.506 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.506 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.506 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.506 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.507 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/network.outgoing.packets volume: 15 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.507 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.507 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-26T17:28:32.504231) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.507 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.507 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f04ce8ada60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.508 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.508 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.508 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.508 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.508 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.508 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.509 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.509 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f04ce8adaf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.509 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.509 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8adb20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.509 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8adb20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.509 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.509 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-26T17:28:32.505755) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.510 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.510 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: te-9772802-asg-iiflfa6ewgov-5e5s3ztwtke7-uezresy5bjmm>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: te-9772802-asg-iiflfa6ewgov-5e5s3ztwtke7-uezresy5bjmm>]
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.510 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f04ce8ad880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.510 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.510 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.510 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.510 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-26T17:28:32.506830) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.511 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.511 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.511 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.511 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-26T17:28:32.508275) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.511 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.512 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f04ce8af3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.512 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.512 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.512 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.512 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.512 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/memory.usage volume: 46.66796875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.513 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/memory.usage volume: 43.37890625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.513 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2026-01-26T17:28:32.509872) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.513 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.513 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f04ce8af6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.513 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-26T17:28:32.511032) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.513 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.513 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.514 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.514 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.514 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-26T17:28:32.512574) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.514 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.514 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-26T17:28:32.514136) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.514 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.515 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.515 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f04ce8af410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.515 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.515 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.515 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.515 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.515 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/network.incoming.bytes volume: 1346 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.516 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/network.incoming.bytes volume: 1430 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.516 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.516 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f04ce8ac470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.516 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.516 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.516 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.517 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.517 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.517 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.517 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.517 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f04d17142f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.518 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.518 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.518 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.518 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.518 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-26T17:28:32.515609) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.518 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-26T17:28:32.517046) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.519 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-26T17:28:32.518273) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.542 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.allocation volume: 30154752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.543 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.557 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.allocation volume: 30089216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.557 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.558 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.558 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f04ce8afe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.558 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.558 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.558 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.559 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.559 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.559 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.559 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.559 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-26T17:28:32.559051) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.560 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.560 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.560 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f04ce8aede0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.561 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.561 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.561 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.561 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.561 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.read.bytes volume: 29338624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.561 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.561 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.read.bytes volume: 29129728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.562 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-26T17:28:32.561237) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.562 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.562 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.562 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f04ce8aef00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.562 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.563 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.563 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.563 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.563 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.read.latency volume: 407670116 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.563 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.read.latency volume: 56361248 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.563 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.read.latency volume: 440552323 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.564 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.read.latency volume: 54239181 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.564 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.564 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f04ce8ac710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.564 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.565 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.565 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.565 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.565 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.565 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.566 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.566 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f04ce8aef60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.566 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.566 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.566 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.566 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.566 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.read.requests volume: 1056 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.566 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.567 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.read.requests volume: 1044 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.567 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.567 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.568 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f04ce8aefc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.568 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.568 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.568 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.568 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.568 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.usage volume: 29818880 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.568 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.569 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.usage volume: 29884416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.569 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.569 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.570 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.571 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.571 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.571 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.572 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.572 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.572 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.572 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.573 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.573 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.573 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.573 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.574 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.574 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.574 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.575 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.575 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.575 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.576 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.576 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.576 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.576 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.577 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.577 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.577 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.577 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.578 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-26T17:28:32.563257) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.579 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-26T17:28:32.565253) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.579 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-26T17:28:32.566513) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:28:32 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:28:32.579 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-26T17:28:32.568405) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:28:34 compute-0 nova_compute[185389]: 2026-01-26 17:28:34.770 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:28:35 compute-0 nova_compute[185389]: 2026-01-26 17:28:35.762 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:28:36 compute-0 nova_compute[185389]: 2026-01-26 17:28:36.074 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:28:36 compute-0 nova_compute[185389]: 2026-01-26 17:28:36.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:28:39 compute-0 nova_compute[185389]: 2026-01-26 17:28:39.774 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:28:41 compute-0 nova_compute[185389]: 2026-01-26 17:28:41.077 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:28:44 compute-0 nova_compute[185389]: 2026-01-26 17:28:44.715 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:28:44 compute-0 nova_compute[185389]: 2026-01-26 17:28:44.785 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:28:46 compute-0 nova_compute[185389]: 2026-01-26 17:28:46.081 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:28:47 compute-0 podman[260495]: 2026-01-26 17:28:47.228593163 +0000 UTC m=+0.086676299 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 26 17:28:47 compute-0 nova_compute[185389]: 2026-01-26 17:28:47.640 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:28:49 compute-0 nova_compute[185389]: 2026-01-26 17:28:49.788 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:28:50 compute-0 podman[260522]: 2026-01-26 17:28:50.201593893 +0000 UTC m=+0.078691972 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent)
Jan 26 17:28:50 compute-0 podman[260525]: 2026-01-26 17:28:50.202182279 +0000 UTC m=+0.079157564 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 26 17:28:50 compute-0 podman[260520]: 2026-01-26 17:28:50.224650811 +0000 UTC m=+0.114320802 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, io.buildah.version=1.33.7, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, vcs-type=git)
Jan 26 17:28:50 compute-0 podman[260521]: 2026-01-26 17:28:50.228223958 +0000 UTC m=+0.110216680 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20260120, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Jan 26 17:28:51 compute-0 nova_compute[185389]: 2026-01-26 17:28:51.083 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:28:54 compute-0 nova_compute[185389]: 2026-01-26 17:28:54.793 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:28:56 compute-0 nova_compute[185389]: 2026-01-26 17:28:56.085 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:28:56 compute-0 podman[260593]: 2026-01-26 17:28:56.226325506 +0000 UTC m=+0.102875030 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 26 17:28:56 compute-0 podman[260594]: 2026-01-26 17:28:56.239145705 +0000 UTC m=+0.115088873 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, container_name=kepler, io.openshift.expose-services=, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, vendor=Red Hat, Inc., release=1214.1726694543, release-0.7.12=, architecture=x86_64, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=kepler, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9)
Jan 26 17:28:56 compute-0 podman[260592]: 2026-01-26 17:28:56.260094545 +0000 UTC m=+0.147724831 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, 
org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:28:59 compute-0 podman[201244]: time="2026-01-26T17:28:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 17:28:59 compute-0 podman[201244]: @ - - [26/Jan/2026:17:28:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 17:28:59 compute-0 podman[201244]: @ - - [26/Jan/2026:17:28:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4385 "" "Go-http-client/1.1"
Jan 26 17:28:59 compute-0 nova_compute[185389]: 2026-01-26 17:28:59.796 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:29:01 compute-0 nova_compute[185389]: 2026-01-26 17:29:01.089 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:29:01 compute-0 openstack_network_exporter[204387]: ERROR   17:29:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 17:29:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:29:01 compute-0 openstack_network_exporter[204387]: ERROR   17:29:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 17:29:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:29:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:29:01.787 106955 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:29:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:29:01.788 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:29:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:29:01.788 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:29:04 compute-0 nova_compute[185389]: 2026-01-26 17:29:04.799 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:29:05 compute-0 nova_compute[185389]: 2026-01-26 17:29:05.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:29:06 compute-0 nova_compute[185389]: 2026-01-26 17:29:06.091 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:29:07 compute-0 podman[201244]: time="2026-01-26T17:29:07Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 17:29:07 compute-0 podman[201244]: @ - - [26/Jan/2026:17:29:07 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 28896 "" "Go-http-client/1.1"
Jan 26 17:29:09 compute-0 nova_compute[185389]: 2026-01-26 17:29:09.803 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:29:11 compute-0 nova_compute[185389]: 2026-01-26 17:29:11.095 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:29:14 compute-0 nova_compute[185389]: 2026-01-26 17:29:14.806 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:29:16 compute-0 nova_compute[185389]: 2026-01-26 17:29:16.097 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:29:16 compute-0 nova_compute[185389]: 2026-01-26 17:29:16.733 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:29:18 compute-0 podman[260657]: 2026-01-26 17:29:18.198808733 +0000 UTC m=+0.087524032 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 26 17:29:19 compute-0 nova_compute[185389]: 2026-01-26 17:29:19.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:29:19 compute-0 nova_compute[185389]: 2026-01-26 17:29:19.720 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 17:29:19 compute-0 nova_compute[185389]: 2026-01-26 17:29:19.809 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:29:20 compute-0 nova_compute[185389]: 2026-01-26 17:29:20.721 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:29:21 compute-0 nova_compute[185389]: 2026-01-26 17:29:21.102 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:29:21 compute-0 podman[260683]: 2026-01-26 17:29:21.197755579 +0000 UTC m=+0.080412478 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Jan 26 17:29:21 compute-0 podman[260681]: 2026-01-26 17:29:21.199191689 +0000 UTC m=+0.089200269 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, release=1755695350, vcs-type=git, distribution-scope=public, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6)
Jan 26 17:29:21 compute-0 podman[260682]: 2026-01-26 17:29:21.234978483 +0000 UTC m=+0.121508848 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.build-date=20260120)
Jan 26 17:29:21 compute-0 podman[260684]: 2026-01-26 17:29:21.237589834 +0000 UTC m=+0.115483404 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 26 17:29:22 compute-0 nova_compute[185389]: 2026-01-26 17:29:22.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:29:22 compute-0 nova_compute[185389]: 2026-01-26 17:29:22.720 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 17:29:23 compute-0 nova_compute[185389]: 2026-01-26 17:29:23.200 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "refresh_cache-e833646f-b29a-4fe4-b786-4ee23c6f8a82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 17:29:23 compute-0 nova_compute[185389]: 2026-01-26 17:29:23.201 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquired lock "refresh_cache-e833646f-b29a-4fe4-b786-4ee23c6f8a82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 17:29:23 compute-0 nova_compute[185389]: 2026-01-26 17:29:23.202 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 26 17:29:24 compute-0 nova_compute[185389]: 2026-01-26 17:29:24.642 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] Updating instance_info_cache with network_info: [{"id": "d4acf2b5-6510-4f4d-b2a0-e986d8b8d2f7", "address": "fa:16:3e:80:a8:b1", "network": {"id": "ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.222", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "237a863555d84bd386855d9cf781beb4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4acf2b5-65", "ovs_interfaceid": "d4acf2b5-6510-4f4d-b2a0-e986d8b8d2f7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 17:29:24 compute-0 nova_compute[185389]: 2026-01-26 17:29:24.666 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Releasing lock "refresh_cache-e833646f-b29a-4fe4-b786-4ee23c6f8a82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 17:29:24 compute-0 nova_compute[185389]: 2026-01-26 17:29:24.667 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 26 17:29:24 compute-0 nova_compute[185389]: 2026-01-26 17:29:24.668 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:29:24 compute-0 nova_compute[185389]: 2026-01-26 17:29:24.669 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:29:24 compute-0 nova_compute[185389]: 2026-01-26 17:29:24.814 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:29:26 compute-0 nova_compute[185389]: 2026-01-26 17:29:26.112 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:29:27 compute-0 podman[260770]: 2026-01-26 17:29:27.21320732 +0000 UTC m=+0.096076815 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 26 17:29:27 compute-0 podman[260771]: 2026-01-26 17:29:27.239154056 +0000 UTC m=+0.115345670 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, vendor=Red Hat, Inc., architecture=x86_64, config_id=kepler, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, distribution-scope=public, release=1214.1726694543, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 26 17:29:27 compute-0 podman[260769]: 2026-01-26 17:29:27.252173471 +0000 UTC m=+0.135408387 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, 
org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:29:29 compute-0 podman[201244]: time="2026-01-26T17:29:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 17:29:29 compute-0 podman[201244]: @ - - [26/Jan/2026:17:29:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 17:29:29 compute-0 podman[201244]: @ - - [26/Jan/2026:17:29:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4388 "" "Go-http-client/1.1"
Jan 26 17:29:29 compute-0 nova_compute[185389]: 2026-01-26 17:29:29.819 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:29:31 compute-0 nova_compute[185389]: 2026-01-26 17:29:31.116 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:29:31 compute-0 openstack_network_exporter[204387]: ERROR   17:29:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 17:29:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:29:31 compute-0 openstack_network_exporter[204387]: ERROR   17:29:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 17:29:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:29:31 compute-0 nova_compute[185389]: 2026-01-26 17:29:31.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:29:31 compute-0 nova_compute[185389]: 2026-01-26 17:29:31.932 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:29:31 compute-0 nova_compute[185389]: 2026-01-26 17:29:31.934 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:29:31 compute-0 nova_compute[185389]: 2026-01-26 17:29:31.934 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:29:31 compute-0 nova_compute[185389]: 2026-01-26 17:29:31.935 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 17:29:32 compute-0 nova_compute[185389]: 2026-01-26 17:29:32.178 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:29:32 compute-0 nova_compute[185389]: 2026-01-26 17:29:32.262 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:29:32 compute-0 nova_compute[185389]: 2026-01-26 17:29:32.264 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:29:32 compute-0 nova_compute[185389]: 2026-01-26 17:29:32.335 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:29:32 compute-0 nova_compute[185389]: 2026-01-26 17:29:32.346 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f9b0315f-2a3c-471e-b629-b19d90a40a97/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:29:32 compute-0 nova_compute[185389]: 2026-01-26 17:29:32.417 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f9b0315f-2a3c-471e-b629-b19d90a40a97/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:29:32 compute-0 nova_compute[185389]: 2026-01-26 17:29:32.419 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f9b0315f-2a3c-471e-b629-b19d90a40a97/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:29:32 compute-0 nova_compute[185389]: 2026-01-26 17:29:32.496 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f9b0315f-2a3c-471e-b629-b19d90a40a97/disk --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:29:32 compute-0 nova_compute[185389]: 2026-01-26 17:29:32.880 185393 WARNING nova.virt.libvirt.driver [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:29:32 compute-0 nova_compute[185389]: 2026-01-26 17:29:32.882 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4879MB free_disk=72.27883911132812GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 17:29:32 compute-0 nova_compute[185389]: 2026-01-26 17:29:32.883 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:29:32 compute-0 nova_compute[185389]: 2026-01-26 17:29:32.884 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:29:32 compute-0 nova_compute[185389]: 2026-01-26 17:29:32.965 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance f9b0315f-2a3c-471e-b629-b19d90a40a97 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 17:29:32 compute-0 nova_compute[185389]: 2026-01-26 17:29:32.966 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance e833646f-b29a-4fe4-b786-4ee23c6f8a82 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 17:29:32 compute-0 nova_compute[185389]: 2026-01-26 17:29:32.966 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 17:29:32 compute-0 nova_compute[185389]: 2026-01-26 17:29:32.967 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 17:29:33 compute-0 nova_compute[185389]: 2026-01-26 17:29:33.121 185393 DEBUG nova.compute.provider_tree [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:29:33 compute-0 nova_compute[185389]: 2026-01-26 17:29:33.320 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:29:33 compute-0 nova_compute[185389]: 2026-01-26 17:29:33.323 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 17:29:33 compute-0 nova_compute[185389]: 2026-01-26 17:29:33.323 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.439s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:29:34 compute-0 nova_compute[185389]: 2026-01-26 17:29:34.824 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:29:36 compute-0 nova_compute[185389]: 2026-01-26 17:29:36.118 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:29:37 compute-0 nova_compute[185389]: 2026-01-26 17:29:37.319 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:29:38 compute-0 nova_compute[185389]: 2026-01-26 17:29:38.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:29:39 compute-0 nova_compute[185389]: 2026-01-26 17:29:39.833 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:29:41 compute-0 nova_compute[185389]: 2026-01-26 17:29:41.122 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:29:44 compute-0 nova_compute[185389]: 2026-01-26 17:29:44.840 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:29:46 compute-0 nova_compute[185389]: 2026-01-26 17:29:46.124 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:29:47 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 26 17:29:49 compute-0 podman[260841]: 2026-01-26 17:29:49.219695693 +0000 UTC m=+0.093635758 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 26 17:29:49 compute-0 nova_compute[185389]: 2026-01-26 17:29:49.845 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:29:51 compute-0 nova_compute[185389]: 2026-01-26 17:29:51.127 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:29:52 compute-0 podman[260867]: 2026-01-26 17:29:52.194486272 +0000 UTC m=+0.080178422 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, 
managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260120, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Jan 26 17:29:52 compute-0 podman[260868]: 2026-01-26 17:29:52.199375536 +0000 UTC m=+0.082680221 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:29:52 compute-0 podman[260866]: 2026-01-26 17:29:52.20137859 +0000 UTC m=+0.093460104 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=openstack_network_exporter, container_name=openstack_network_exporter, distribution-scope=public, name=ubi9-minimal, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.openshift.expose-services=)
Jan 26 17:29:52 compute-0 podman[260869]: 2026-01-26 17:29:52.212304198 +0000 UTC m=+0.089649541 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 17:29:54 compute-0 nova_compute[185389]: 2026-01-26 17:29:54.850 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:29:56 compute-0 nova_compute[185389]: 2026-01-26 17:29:56.129 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:29:58 compute-0 podman[260946]: 2026-01-26 17:29:58.205916363 +0000 UTC m=+0.085631470 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Jan 26 17:29:58 compute-0 podman[260947]: 2026-01-26 17:29:58.221670522 +0000 UTC m=+0.096999531 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, architecture=x86_64, build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, name=ubi9, com.redhat.component=ubi9-container, distribution-scope=public, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.buildah.version=1.29.0, version=9.4, release=1214.1726694543)
Jan 26 17:29:58 compute-0 podman[260945]: 2026-01-26 17:29:58.230000009 +0000 UTC m=+0.112777501 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251202, 
org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:29:59 compute-0 podman[201244]: time="2026-01-26T17:29:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 17:29:59 compute-0 podman[201244]: @ - - [26/Jan/2026:17:29:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 17:29:59 compute-0 podman[201244]: @ - - [26/Jan/2026:17:29:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4389 "" "Go-http-client/1.1"
Jan 26 17:29:59 compute-0 nova_compute[185389]: 2026-01-26 17:29:59.854 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:30:01 compute-0 nova_compute[185389]: 2026-01-26 17:30:01.132 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:30:01 compute-0 openstack_network_exporter[204387]: ERROR   17:30:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 17:30:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:30:01 compute-0 openstack_network_exporter[204387]: ERROR   17:30:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 17:30:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:30:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:30:01.789 106955 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:30:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:30:01.790 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:30:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:30:01.790 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:30:04 compute-0 nova_compute[185389]: 2026-01-26 17:30:04.858 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:30:06 compute-0 nova_compute[185389]: 2026-01-26 17:30:06.135 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:30:09 compute-0 nova_compute[185389]: 2026-01-26 17:30:09.862 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:30:11 compute-0 nova_compute[185389]: 2026-01-26 17:30:11.138 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:30:14 compute-0 nova_compute[185389]: 2026-01-26 17:30:14.867 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:30:16 compute-0 nova_compute[185389]: 2026-01-26 17:30:16.140 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:30:18 compute-0 nova_compute[185389]: 2026-01-26 17:30:18.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:30:19 compute-0 nova_compute[185389]: 2026-01-26 17:30:19.872 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:30:20 compute-0 podman[261009]: 2026-01-26 17:30:20.198444926 +0000 UTC m=+0.078000994 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 26 17:30:20 compute-0 nova_compute[185389]: 2026-01-26 17:30:20.722 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:30:20 compute-0 nova_compute[185389]: 2026-01-26 17:30:20.722 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:30:20 compute-0 nova_compute[185389]: 2026-01-26 17:30:20.723 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 17:30:21 compute-0 nova_compute[185389]: 2026-01-26 17:30:21.142 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:30:22 compute-0 nova_compute[185389]: 2026-01-26 17:30:22.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:30:23 compute-0 podman[261034]: 2026-01-26 17:30:23.266248446 +0000 UTC m=+0.128862038 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, name=ubi9-minimal, vcs-type=git, version=9.6, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.openshift.expose-services=, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, config_id=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible)
Jan 26 17:30:23 compute-0 podman[261035]: 2026-01-26 17:30:23.27303342 +0000 UTC m=+0.137345388 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260120, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, io.buildah.version=1.41.4)
Jan 26 17:30:23 compute-0 podman[261037]: 2026-01-26 17:30:23.285299685 +0000 UTC m=+0.137718760 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 26 17:30:23 compute-0 podman[261036]: 2026-01-26 17:30:23.29323471 +0000 UTC m=+0.142114958 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Jan 26 17:30:23 compute-0 nova_compute[185389]: 2026-01-26 17:30:23.724 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:30:23 compute-0 nova_compute[185389]: 2026-01-26 17:30:23.724 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 17:30:23 compute-0 nova_compute[185389]: 2026-01-26 17:30:23.724 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 17:30:24 compute-0 nova_compute[185389]: 2026-01-26 17:30:24.042 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "refresh_cache-f9b0315f-2a3c-471e-b629-b19d90a40a97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 17:30:24 compute-0 nova_compute[185389]: 2026-01-26 17:30:24.043 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquired lock "refresh_cache-f9b0315f-2a3c-471e-b629-b19d90a40a97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 17:30:24 compute-0 nova_compute[185389]: 2026-01-26 17:30:24.043 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 26 17:30:24 compute-0 nova_compute[185389]: 2026-01-26 17:30:24.043 185393 DEBUG nova.objects.instance [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lazy-loading 'info_cache' on Instance uuid f9b0315f-2a3c-471e-b629-b19d90a40a97 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 17:30:24 compute-0 nova_compute[185389]: 2026-01-26 17:30:24.877 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:30:25 compute-0 nova_compute[185389]: 2026-01-26 17:30:25.251 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] Updating instance_info_cache with network_info: [{"id": "4ea974be-d995-4c0f-bbcd-7a1410b167d8", "address": "fa:16:3e:ea:9e:d9", "network": {"id": "ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.123", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "237a863555d84bd386855d9cf781beb4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ea974be-d9", "ovs_interfaceid": "4ea974be-d995-4c0f-bbcd-7a1410b167d8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 17:30:25 compute-0 nova_compute[185389]: 2026-01-26 17:30:25.275 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Releasing lock "refresh_cache-f9b0315f-2a3c-471e-b629-b19d90a40a97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 17:30:25 compute-0 nova_compute[185389]: 2026-01-26 17:30:25.276 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 26 17:30:25 compute-0 nova_compute[185389]: 2026-01-26 17:30:25.277 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:30:26 compute-0 nova_compute[185389]: 2026-01-26 17:30:26.147 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:30:29 compute-0 podman[261111]: 2026-01-26 17:30:29.253603301 +0000 UTC m=+0.127309805 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, config_id=kepler, maintainer=Red Hat, Inc., managed_by=edpm_ansible, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, architecture=x86_64, io.buildah.version=1.29.0, vcs-type=git, name=ubi9, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, distribution-scope=public, io.openshift.expose-services=, version=9.4)
Jan 26 17:30:29 compute-0 podman[261109]: 2026-01-26 17:30:29.259295346 +0000 UTC m=+0.140968807 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, 
io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 26 17:30:29 compute-0 podman[261110]: 2026-01-26 17:30:29.270192823 +0000 UTC m=+0.149012836 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team)
Jan 26 17:30:29 compute-0 podman[201244]: time="2026-01-26T17:30:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 17:30:29 compute-0 podman[201244]: @ - - [26/Jan/2026:17:30:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 17:30:29 compute-0 podman[201244]: @ - - [26/Jan/2026:17:30:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4390 "" "Go-http-client/1.1"
Jan 26 17:30:29 compute-0 nova_compute[185389]: 2026-01-26 17:30:29.883 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:30:31 compute-0 nova_compute[185389]: 2026-01-26 17:30:31.150 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.361 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.362 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.362 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.363 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f04ce8af020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.364 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.364 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.365 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.365 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.366 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.366 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.366 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.367 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.367 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.367 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.368 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.368 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.369 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8adb20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.369 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'e833646f-b29a-4fe4-b786-4ee23c6f8a82', 'name': 'te-9772802-asg-iiflfa6ewgov-5e5s3ztwtke7-uezresy5bjmm', 'flavor': {'id': '8d013773-e8ea-4b83-a8e3-f58d9749637f', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'a3153c85-d830-4fd6-8cd6-1a69e6723a9e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000010', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '237a863555d84bd386855d9cf781beb4', 'user_id': '5ca35c18e54b493f9efdfe2218cce3c7', 'hostId': 'd53ff20533f73aa1094f7d1b315e252b91e3e85487374d883e31cb42', 'status': 'active', 'metadata': {'metering.server_group': '21873820-28a9-4731-9256-efbf2eb46b4d'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.370 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.371 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.371 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.372 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.372 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.372 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.373 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.373 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.374 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.374 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.374 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'f9b0315f-2a3c-471e-b629-b19d90a40a97', 'name': 'te-9772802-asg-iiflfa6ewgov-qnk6sbr7rvkm-ziuhg6hort2c', 'flavor': {'id': '8d013773-e8ea-4b83-a8e3-f58d9749637f', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'a3153c85-d830-4fd6-8cd6-1a69e6723a9e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000d', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '237a863555d84bd386855d9cf781beb4', 'user_id': '5ca35c18e54b493f9efdfe2218cce3c7', 'hostId': 'd53ff20533f73aa1094f7d1b315e252b91e3e85487374d883e31cb42', 'status': 'active', 'metadata': {'metering.server_group': '21873820-28a9-4731-9256-efbf2eb46b4d'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.375 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.375 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.375 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.376 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.376 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-26T17:30:31.375999) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.375 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{'inspect_disks': {}}], pollster history [{'disk.device.write.bytes': [<NovaLikeServer: te-9772802-asg-iiflfa6ewgov-5e5s3ztwtke7-uezresy5bjmm>, <NovaLikeServer: te-9772802-asg-iiflfa6ewgov-qnk6sbr7rvkm-ziuhg6hort2c>]}], and discovery cache [{'local_instances': [<NovaLikeServer: te-9772802-asg-iiflfa6ewgov-5e5s3ztwtke7-uezresy5bjmm>, <NovaLikeServer: te-9772802-asg-iiflfa6ewgov-qnk6sbr7rvkm-ziuhg6hort2c>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.377 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{'inspect_disks': {}}], pollster history [{'disk.device.write.bytes': [<NovaLikeServer: te-9772802-asg-iiflfa6ewgov-5e5s3ztwtke7-uezresy5bjmm>, <NovaLikeServer: te-9772802-asg-iiflfa6ewgov-qnk6sbr7rvkm-ziuhg6hort2c>]}], and discovery cache [{'local_instances': [<NovaLikeServer: te-9772802-asg-iiflfa6ewgov-5e5s3ztwtke7-uezresy5bjmm>, <NovaLikeServer: te-9772802-asg-iiflfa6ewgov-qnk6sbr7rvkm-ziuhg6hort2c>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.413 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.write.bytes volume: 72863744 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.414 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:30:31 compute-0 openstack_network_exporter[204387]: ERROR   17:30:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 17:30:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:30:31 compute-0 openstack_network_exporter[204387]: ERROR   17:30:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 17:30:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.461 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.write.bytes volume: 72957952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.462 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.462 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.462 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f04ce8af080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.463 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.463 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.463 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.463 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.464 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.write.latency volume: 6607735993 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.464 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.464 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-26T17:30:31.463478) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.464 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.write.latency volume: 16991195692 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.464 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.465 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.465 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f04ce8af0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.465 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.465 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.465 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.465 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.465 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.write.requests volume: 311 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.465 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.466 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.write.requests volume: 321 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.466 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.466 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-26T17:30:31.465603) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.466 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.467 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f04ce8af470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.467 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.467 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.467 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.467 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.468 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-26T17:30:31.467610) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.471 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/network.incoming.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.475 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.476 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.476 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f04ce8ac260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.476 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.476 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f04cfa81c70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.476 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.476 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.476 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.476 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.477 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-26T17:30:31.476878) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.500 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/cpu volume: 162990000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.521 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/cpu volume: 335180000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.521 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.521 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f04ce8ac170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.521 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.521 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.522 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.522 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.522 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.522 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/network.incoming.packets volume: 12 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.523 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.523 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f04ce8af140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.523 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.523 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.523 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.523 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.522 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-26T17:30:31.522124) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.524 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.524 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f04ce8adb50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.524 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.524 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.524 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.524 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.524 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.524 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-26T17:30:31.523652) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.525 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.525 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.525 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f04ce8ad9a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.525 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.525 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.526 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.526 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.526 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.526 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.527 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.527 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f04ce8af1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.527 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.527 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.527 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.527 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.527 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-26T17:30:31.524753) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.528 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-26T17:30:31.526169) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.528 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.528 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f04ce8ad8e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.528 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-26T17:30:31.527886) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.529 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.529 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.529 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.529 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.529 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.529 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.530 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-26T17:30:31.529354) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.530 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.530 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f04ce8ada60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.530 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.530 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.530 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.531 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.531 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.531 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/network.outgoing.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.531 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.531 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f04ce8adaf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.532 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.532 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f04ce8ad880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.532 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.532 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.532 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.532 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.532 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-26T17:30:31.530992) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.532 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.533 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.533 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-26T17:30:31.532621) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.533 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.533 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f04ce8af3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.533 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.533 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.533 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.533 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.534 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/memory.usage volume: 43.328125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.534 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/memory.usage volume: 42.98046875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.534 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.534 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f04ce8af6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.534 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.534 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.534 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.535 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-26T17:30:31.533920) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.535 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.535 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.535 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.536 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.536 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f04ce8af410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.536 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.536 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.536 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.536 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.536 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/network.incoming.bytes volume: 1976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.537 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/network.incoming.bytes volume: 1430 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.537 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.537 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-26T17:30:31.535338) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.537 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f04ce8ac470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.537 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.538 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.538 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.538 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.538 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.538 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.538 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-26T17:30:31.536596) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.539 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.539 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f04d17142f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.539 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-26T17:30:31.538290) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.539 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.539 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.539 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.540 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.540 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-26T17:30:31.540026) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.554 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.allocation volume: 30154752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.554 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.567 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.allocation volume: 30089216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.567 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.568 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.568 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f04ce8afe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.568 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.569 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.569 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.569 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.570 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.570 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.571 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.571 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.572 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.572 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f04ce8aede0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.572 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.572 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.573 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.573 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.573 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.read.bytes volume: 29338624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.573 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.574 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.read.bytes volume: 30358016 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.574 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.575 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.575 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f04ce8aef00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.576 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.576 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.576 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.576 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.576 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.read.latency volume: 407670116 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.576 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-26T17:30:31.569810) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.577 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.read.latency volume: 56361248 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.577 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.read.latency volume: 464695240 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.578 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.read.latency volume: 61571959 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.578 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.579 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f04ce8ac710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.579 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.579 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.579 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-26T17:30:31.573269) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.579 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.580 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.580 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.580 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-26T17:30:31.576616) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.580 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.581 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.581 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f04ce8aef60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.581 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.582 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-26T17:30:31.580104) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.582 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.582 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.582 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.582 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.read.requests volume: 1056 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.583 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.583 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.read.requests volume: 1091 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.584 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.584 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.585 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f04ce8aefc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.585 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.585 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.586 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.586 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.586 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.usage volume: 29884416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.586 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.586 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.usage volume: 29949952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.587 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.587 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-26T17:30:31.582553) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.587 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.588 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-26T17:30:31.586221) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.588 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.589 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.589 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.589 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.589 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.590 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.590 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.590 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.590 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.591 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.591 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.591 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.592 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.592 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.592 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.592 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.593 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.593 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.593 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.593 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.594 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.594 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.594 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.595 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.595 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:30:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:30:31.595 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:30:31 compute-0 nova_compute[185389]: 2026-01-26 17:30:31.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:30:31 compute-0 nova_compute[185389]: 2026-01-26 17:30:31.756 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:30:31 compute-0 nova_compute[185389]: 2026-01-26 17:30:31.757 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:30:31 compute-0 nova_compute[185389]: 2026-01-26 17:30:31.757 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:30:31 compute-0 nova_compute[185389]: 2026-01-26 17:30:31.757 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 17:30:31 compute-0 nova_compute[185389]: 2026-01-26 17:30:31.854 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:30:31 compute-0 nova_compute[185389]: 2026-01-26 17:30:31.926 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:30:31 compute-0 nova_compute[185389]: 2026-01-26 17:30:31.928 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:30:31 compute-0 nova_compute[185389]: 2026-01-26 17:30:31.991 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:30:31 compute-0 nova_compute[185389]: 2026-01-26 17:30:31.998 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f9b0315f-2a3c-471e-b629-b19d90a40a97/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:30:32 compute-0 nova_compute[185389]: 2026-01-26 17:30:32.063 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f9b0315f-2a3c-471e-b629-b19d90a40a97/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:30:32 compute-0 nova_compute[185389]: 2026-01-26 17:30:32.064 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f9b0315f-2a3c-471e-b629-b19d90a40a97/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:30:32 compute-0 nova_compute[185389]: 2026-01-26 17:30:32.131 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f9b0315f-2a3c-471e-b629-b19d90a40a97/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:30:32 compute-0 nova_compute[185389]: 2026-01-26 17:30:32.467 185393 WARNING nova.virt.libvirt.driver [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:30:32 compute-0 nova_compute[185389]: 2026-01-26 17:30:32.468 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4845MB free_disk=72.27878189086914GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 17:30:32 compute-0 nova_compute[185389]: 2026-01-26 17:30:32.469 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:30:32 compute-0 nova_compute[185389]: 2026-01-26 17:30:32.469 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:30:32 compute-0 nova_compute[185389]: 2026-01-26 17:30:32.625 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance f9b0315f-2a3c-471e-b629-b19d90a40a97 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 17:30:32 compute-0 nova_compute[185389]: 2026-01-26 17:30:32.625 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance e833646f-b29a-4fe4-b786-4ee23c6f8a82 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 17:30:32 compute-0 nova_compute[185389]: 2026-01-26 17:30:32.626 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 17:30:32 compute-0 nova_compute[185389]: 2026-01-26 17:30:32.626 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 17:30:32 compute-0 nova_compute[185389]: 2026-01-26 17:30:32.793 185393 DEBUG nova.compute.provider_tree [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:30:32 compute-0 nova_compute[185389]: 2026-01-26 17:30:32.810 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:30:32 compute-0 nova_compute[185389]: 2026-01-26 17:30:32.811 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 17:30:32 compute-0 nova_compute[185389]: 2026-01-26 17:30:32.812 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.343s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:30:34 compute-0 nova_compute[185389]: 2026-01-26 17:30:34.887 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:30:36 compute-0 nova_compute[185389]: 2026-01-26 17:30:36.154 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:30:38 compute-0 nova_compute[185389]: 2026-01-26 17:30:38.807 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:30:39 compute-0 nova_compute[185389]: 2026-01-26 17:30:39.891 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:30:40 compute-0 nova_compute[185389]: 2026-01-26 17:30:40.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:30:41 compute-0 nova_compute[185389]: 2026-01-26 17:30:41.159 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:30:44 compute-0 nova_compute[185389]: 2026-01-26 17:30:44.893 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:30:45 compute-0 nova_compute[185389]: 2026-01-26 17:30:45.715 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:30:46 compute-0 nova_compute[185389]: 2026-01-26 17:30:46.160 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:30:49 compute-0 nova_compute[185389]: 2026-01-26 17:30:49.896 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:30:51 compute-0 nova_compute[185389]: 2026-01-26 17:30:51.163 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:30:51 compute-0 podman[261199]: 2026-01-26 17:30:51.173215 +0000 UTC m=+0.064613459 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 26 17:30:54 compute-0 podman[261222]: 2026-01-26 17:30:54.206550591 +0000 UTC m=+0.083408790 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260120, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Jan 26 17:30:54 compute-0 podman[261221]: 2026-01-26 17:30:54.217343405 +0000 UTC m=+0.102670514 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=openstack_network_exporter, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., name=ubi9-minimal, version=9.6, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.openshift.expose-services=)
Jan 26 17:30:54 compute-0 podman[261223]: 2026-01-26 17:30:54.220391288 +0000 UTC m=+0.097737200 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 26 17:30:54 compute-0 podman[261224]: 2026-01-26 17:30:54.229542798 +0000 UTC m=+0.096058926 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 26 17:30:54 compute-0 nova_compute[185389]: 2026-01-26 17:30:54.899 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:30:56 compute-0 nova_compute[185389]: 2026-01-26 17:30:56.163 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:30:59 compute-0 podman[201244]: time="2026-01-26T17:30:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 17:30:59 compute-0 podman[201244]: @ - - [26/Jan/2026:17:30:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 17:30:59 compute-0 podman[201244]: @ - - [26/Jan/2026:17:30:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4380 "" "Go-http-client/1.1"
Jan 26 17:30:59 compute-0 nova_compute[185389]: 2026-01-26 17:30:59.906 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:31:00 compute-0 podman[261296]: 2026-01-26 17:31:00.199722288 +0000 UTC m=+0.079099964 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, 
container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 17:31:00 compute-0 podman[261297]: 2026-01-26 17:31:00.234463733 +0000 UTC m=+0.107848846 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, vcs-type=git, com.redhat.component=ubi9-container, container_name=kepler, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, config_id=kepler, io.buildah.version=1.29.0, managed_by=edpm_ansible, maintainer=Red Hat, Inc., release=1214.1726694543, release-0.7.12=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware 
and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., version=9.4, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Jan 26 17:31:00 compute-0 podman[261295]: 2026-01-26 17:31:00.267921233 +0000 UTC m=+0.146176228 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 17:31:01 compute-0 nova_compute[185389]: 2026-01-26 17:31:01.165 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:31:01 compute-0 openstack_network_exporter[204387]: ERROR   17:31:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 17:31:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:31:01 compute-0 openstack_network_exporter[204387]: ERROR   17:31:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 17:31:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:31:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:31:01.791 106955 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:31:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:31:01.792 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:31:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:31:01.793 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:31:04 compute-0 nova_compute[185389]: 2026-01-26 17:31:04.910 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:31:06 compute-0 nova_compute[185389]: 2026-01-26 17:31:06.168 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:31:09 compute-0 nova_compute[185389]: 2026-01-26 17:31:09.915 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:31:11 compute-0 nova_compute[185389]: 2026-01-26 17:31:11.170 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:31:14 compute-0 nova_compute[185389]: 2026-01-26 17:31:14.920 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:31:16 compute-0 nova_compute[185389]: 2026-01-26 17:31:16.172 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:31:19 compute-0 nova_compute[185389]: 2026-01-26 17:31:19.924 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:31:20 compute-0 nova_compute[185389]: 2026-01-26 17:31:20.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:31:20 compute-0 nova_compute[185389]: 2026-01-26 17:31:20.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:31:20 compute-0 nova_compute[185389]: 2026-01-26 17:31:20.720 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 17:31:21 compute-0 nova_compute[185389]: 2026-01-26 17:31:21.174 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:31:21 compute-0 nova_compute[185389]: 2026-01-26 17:31:21.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:31:22 compute-0 podman[261358]: 2026-01-26 17:31:22.257542867 +0000 UTC m=+0.110152079 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 26 17:31:22 compute-0 nova_compute[185389]: 2026-01-26 17:31:22.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:31:23 compute-0 nova_compute[185389]: 2026-01-26 17:31:23.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:31:23 compute-0 nova_compute[185389]: 2026-01-26 17:31:23.719 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 17:31:24 compute-0 nova_compute[185389]: 2026-01-26 17:31:24.814 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "refresh_cache-e833646f-b29a-4fe4-b786-4ee23c6f8a82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 17:31:24 compute-0 nova_compute[185389]: 2026-01-26 17:31:24.815 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquired lock "refresh_cache-e833646f-b29a-4fe4-b786-4ee23c6f8a82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 17:31:24 compute-0 nova_compute[185389]: 2026-01-26 17:31:24.815 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 26 17:31:24 compute-0 nova_compute[185389]: 2026-01-26 17:31:24.929 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:31:25 compute-0 podman[261382]: 2026-01-26 17:31:25.240443647 +0000 UTC m=+0.096134718 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20260120, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, tcib_managed=true, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 26 17:31:25 compute-0 podman[261392]: 2026-01-26 17:31:25.251469446 +0000 UTC m=+0.083079531 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 17:31:25 compute-0 podman[261385]: 2026-01-26 17:31:25.254369296 +0000 UTC m=+0.094095762 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 26 17:31:25 compute-0 podman[261381]: 2026-01-26 17:31:25.282573083 +0000 UTC m=+0.145003797 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, vcs-type=git, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_id=openstack_network_exporter, maintainer=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, distribution-scope=public, vendor=Red Hat, Inc.)
Jan 26 17:31:26 compute-0 nova_compute[185389]: 2026-01-26 17:31:26.110 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] Updating instance_info_cache with network_info: [{"id": "d4acf2b5-6510-4f4d-b2a0-e986d8b8d2f7", "address": "fa:16:3e:80:a8:b1", "network": {"id": "ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.222", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "237a863555d84bd386855d9cf781beb4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4acf2b5-65", "ovs_interfaceid": "d4acf2b5-6510-4f4d-b2a0-e986d8b8d2f7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 17:31:26 compute-0 nova_compute[185389]: 2026-01-26 17:31:26.134 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Releasing lock "refresh_cache-e833646f-b29a-4fe4-b786-4ee23c6f8a82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 17:31:26 compute-0 nova_compute[185389]: 2026-01-26 17:31:26.135 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 26 17:31:26 compute-0 nova_compute[185389]: 2026-01-26 17:31:26.136 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:31:26 compute-0 nova_compute[185389]: 2026-01-26 17:31:26.178 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:31:29 compute-0 podman[201244]: time="2026-01-26T17:31:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 17:31:29 compute-0 podman[201244]: @ - - [26/Jan/2026:17:31:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 17:31:29 compute-0 podman[201244]: @ - - [26/Jan/2026:17:31:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4392 "" "Go-http-client/1.1"
Jan 26 17:31:29 compute-0 nova_compute[185389]: 2026-01-26 17:31:29.941 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:31:31 compute-0 nova_compute[185389]: 2026-01-26 17:31:31.181 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:31:31 compute-0 podman[261460]: 2026-01-26 17:31:31.196896311 +0000 UTC m=+0.073406268 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:31:31 compute-0 podman[261461]: 2026-01-26 17:31:31.211463937 +0000 UTC m=+0.082995189 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, config_id=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=base rhel9, vcs-type=git, architecture=x86_64, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, 
com.redhat.component=ubi9-container, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0)
Jan 26 17:31:31 compute-0 podman[261459]: 2026-01-26 17:31:31.233526458 +0000 UTC m=+0.117606081 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 26 17:31:31 compute-0 openstack_network_exporter[204387]: ERROR   17:31:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 17:31:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:31:31 compute-0 openstack_network_exporter[204387]: ERROR   17:31:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 17:31:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:31:31 compute-0 nova_compute[185389]: 2026-01-26 17:31:31.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:31:31 compute-0 nova_compute[185389]: 2026-01-26 17:31:31.770 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:31:31 compute-0 nova_compute[185389]: 2026-01-26 17:31:31.771 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:31:31 compute-0 nova_compute[185389]: 2026-01-26 17:31:31.771 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:31:31 compute-0 nova_compute[185389]: 2026-01-26 17:31:31.772 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 17:31:31 compute-0 nova_compute[185389]: 2026-01-26 17:31:31.856 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:31:31 compute-0 nova_compute[185389]: 2026-01-26 17:31:31.943 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:31:31 compute-0 nova_compute[185389]: 2026-01-26 17:31:31.945 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:31:32 compute-0 nova_compute[185389]: 2026-01-26 17:31:32.013 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:31:32 compute-0 nova_compute[185389]: 2026-01-26 17:31:32.036 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f9b0315f-2a3c-471e-b629-b19d90a40a97/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:31:32 compute-0 nova_compute[185389]: 2026-01-26 17:31:32.101 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f9b0315f-2a3c-471e-b629-b19d90a40a97/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:31:32 compute-0 nova_compute[185389]: 2026-01-26 17:31:32.102 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f9b0315f-2a3c-471e-b629-b19d90a40a97/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:31:32 compute-0 nova_compute[185389]: 2026-01-26 17:31:32.164 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f9b0315f-2a3c-471e-b629-b19d90a40a97/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:31:32 compute-0 nova_compute[185389]: 2026-01-26 17:31:32.544 185393 WARNING nova.virt.libvirt.driver [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:31:32 compute-0 nova_compute[185389]: 2026-01-26 17:31:32.546 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4844MB free_disk=72.27893447875977GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 17:31:32 compute-0 nova_compute[185389]: 2026-01-26 17:31:32.546 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:31:32 compute-0 nova_compute[185389]: 2026-01-26 17:31:32.546 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:31:32 compute-0 nova_compute[185389]: 2026-01-26 17:31:32.636 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance f9b0315f-2a3c-471e-b629-b19d90a40a97 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 17:31:32 compute-0 nova_compute[185389]: 2026-01-26 17:31:32.636 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance e833646f-b29a-4fe4-b786-4ee23c6f8a82 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 17:31:32 compute-0 nova_compute[185389]: 2026-01-26 17:31:32.637 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 17:31:32 compute-0 nova_compute[185389]: 2026-01-26 17:31:32.637 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 17:31:32 compute-0 nova_compute[185389]: 2026-01-26 17:31:32.710 185393 DEBUG nova.compute.provider_tree [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:31:32 compute-0 nova_compute[185389]: 2026-01-26 17:31:32.797 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:31:32 compute-0 nova_compute[185389]: 2026-01-26 17:31:32.811 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 17:31:32 compute-0 nova_compute[185389]: 2026-01-26 17:31:32.812 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.266s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:31:34 compute-0 nova_compute[185389]: 2026-01-26 17:31:34.945 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:31:36 compute-0 nova_compute[185389]: 2026-01-26 17:31:36.182 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:31:39 compute-0 nova_compute[185389]: 2026-01-26 17:31:39.948 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:31:40 compute-0 nova_compute[185389]: 2026-01-26 17:31:40.808 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:31:40 compute-0 nova_compute[185389]: 2026-01-26 17:31:40.809 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:31:41 compute-0 nova_compute[185389]: 2026-01-26 17:31:41.185 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:31:44 compute-0 nova_compute[185389]: 2026-01-26 17:31:44.951 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:31:46 compute-0 nova_compute[185389]: 2026-01-26 17:31:46.187 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:31:49 compute-0 nova_compute[185389]: 2026-01-26 17:31:49.953 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:31:51 compute-0 nova_compute[185389]: 2026-01-26 17:31:51.188 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:31:53 compute-0 podman[261534]: 2026-01-26 17:31:53.168835764 +0000 UTC m=+0.063288023 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 26 17:31:54 compute-0 nova_compute[185389]: 2026-01-26 17:31:54.956 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:31:56 compute-0 nova_compute[185389]: 2026-01-26 17:31:56.190 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:31:56 compute-0 podman[261556]: 2026-01-26 17:31:56.196207684 +0000 UTC m=+0.085997741 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.openshift.expose-services=, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, release=1755695350, name=ubi9-minimal, vcs-type=git, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Jan 26 17:31:56 compute-0 podman[261558]: 2026-01-26 17:31:56.196890932 +0000 UTC m=+0.077225602 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:31:56 compute-0 podman[261559]: 2026-01-26 17:31:56.209819615 +0000 UTC m=+0.086557357 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 26 17:31:56 compute-0 podman[261557]: 2026-01-26 17:31:56.219731124 +0000 UTC m=+0.101693098 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260120, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Jan 26 17:31:59 compute-0 podman[201244]: time="2026-01-26T17:31:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 17:31:59 compute-0 podman[201244]: @ - - [26/Jan/2026:17:31:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 17:31:59 compute-0 podman[201244]: @ - - [26/Jan/2026:17:31:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4392 "" "Go-http-client/1.1"
Jan 26 17:31:59 compute-0 nova_compute[185389]: 2026-01-26 17:31:59.964 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:32:01 compute-0 nova_compute[185389]: 2026-01-26 17:32:01.193 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:32:01 compute-0 openstack_network_exporter[204387]: ERROR   17:32:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 17:32:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:32:01 compute-0 openstack_network_exporter[204387]: ERROR   17:32:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 17:32:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:32:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:32:01.794 106955 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:32:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:32:01.795 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:32:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:32:01.795 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:32:02 compute-0 podman[261633]: 2026-01-26 17:32:02.198336151 +0000 UTC m=+0.082723931 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, com.redhat.component=ubi9-container, distribution-scope=public, release=1214.1726694543, architecture=x86_64, io.buildah.version=1.29.0, name=ubi9, container_name=kepler, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, config_id=kepler, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9)
Jan 26 17:32:02 compute-0 podman[261632]: 2026-01-26 17:32:02.212751533 +0000 UTC m=+0.100935557 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:32:02 compute-0 podman[261631]: 2026-01-26 17:32:02.225795649 +0000 UTC m=+0.118412213 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 17:32:04 compute-0 nova_compute[185389]: 2026-01-26 17:32:04.969 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:32:06 compute-0 nova_compute[185389]: 2026-01-26 17:32:06.196 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:32:09 compute-0 nova_compute[185389]: 2026-01-26 17:32:09.974 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:32:11 compute-0 nova_compute[185389]: 2026-01-26 17:32:11.200 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:32:14 compute-0 nova_compute[185389]: 2026-01-26 17:32:14.978 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:32:16 compute-0 nova_compute[185389]: 2026-01-26 17:32:16.204 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:32:19 compute-0 nova_compute[185389]: 2026-01-26 17:32:19.983 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:32:21 compute-0 nova_compute[185389]: 2026-01-26 17:32:21.207 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:32:21 compute-0 nova_compute[185389]: 2026-01-26 17:32:21.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:32:21 compute-0 nova_compute[185389]: 2026-01-26 17:32:21.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:32:21 compute-0 nova_compute[185389]: 2026-01-26 17:32:21.720 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 17:32:22 compute-0 nova_compute[185389]: 2026-01-26 17:32:22.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:32:22 compute-0 nova_compute[185389]: 2026-01-26 17:32:22.721 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:32:23 compute-0 nova_compute[185389]: 2026-01-26 17:32:23.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:32:24 compute-0 podman[261695]: 2026-01-26 17:32:24.21383173 +0000 UTC m=+0.089145547 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 26 17:32:24 compute-0 nova_compute[185389]: 2026-01-26 17:32:24.721 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:32:24 compute-0 nova_compute[185389]: 2026-01-26 17:32:24.722 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 17:32:24 compute-0 nova_compute[185389]: 2026-01-26 17:32:24.722 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 17:32:24 compute-0 nova_compute[185389]: 2026-01-26 17:32:24.988 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:32:25 compute-0 nova_compute[185389]: 2026-01-26 17:32:25.623 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "refresh_cache-f9b0315f-2a3c-471e-b629-b19d90a40a97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 17:32:25 compute-0 nova_compute[185389]: 2026-01-26 17:32:25.624 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquired lock "refresh_cache-f9b0315f-2a3c-471e-b629-b19d90a40a97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 17:32:25 compute-0 nova_compute[185389]: 2026-01-26 17:32:25.624 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 26 17:32:25 compute-0 nova_compute[185389]: 2026-01-26 17:32:25.625 185393 DEBUG nova.objects.instance [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lazy-loading 'info_cache' on Instance uuid f9b0315f-2a3c-471e-b629-b19d90a40a97 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 17:32:26 compute-0 nova_compute[185389]: 2026-01-26 17:32:26.209 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:32:27 compute-0 podman[261718]: 2026-01-26 17:32:27.205871208 +0000 UTC m=+0.091934323 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., distribution-scope=public, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, name=ubi9-minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, vendor=Red Hat, Inc., config_id=openstack_network_exporter, container_name=openstack_network_exporter, release=1755695350, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, build-date=2025-08-20T13:12:41, io.openshift.expose-services=)
Jan 26 17:32:27 compute-0 podman[261726]: 2026-01-26 17:32:27.212420586 +0000 UTC m=+0.080640465 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 26 17:32:27 compute-0 podman[261720]: 2026-01-26 17:32:27.230794886 +0000 UTC m=+0.102884741 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 17:32:27 compute-0 podman[261719]: 2026-01-26 17:32:27.237245182 +0000 UTC m=+0.115474964 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260120, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Jan 26 17:32:28 compute-0 nova_compute[185389]: 2026-01-26 17:32:28.550 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] Updating instance_info_cache with network_info: [{"id": "4ea974be-d995-4c0f-bbcd-7a1410b167d8", "address": "fa:16:3e:ea:9e:d9", "network": {"id": "ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.123", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "237a863555d84bd386855d9cf781beb4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ea974be-d9", "ovs_interfaceid": "4ea974be-d995-4c0f-bbcd-7a1410b167d8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 17:32:28 compute-0 nova_compute[185389]: 2026-01-26 17:32:28.565 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Releasing lock "refresh_cache-f9b0315f-2a3c-471e-b629-b19d90a40a97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 17:32:28 compute-0 nova_compute[185389]: 2026-01-26 17:32:28.566 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 26 17:32:29 compute-0 podman[201244]: time="2026-01-26T17:32:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 17:32:29 compute-0 podman[201244]: @ - - [26/Jan/2026:17:32:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 17:32:29 compute-0 podman[201244]: @ - - [26/Jan/2026:17:32:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4393 "" "Go-http-client/1.1"
Jan 26 17:32:29 compute-0 nova_compute[185389]: 2026-01-26 17:32:29.997 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:32:31 compute-0 nova_compute[185389]: 2026-01-26 17:32:31.213 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.362 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.362 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.362 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.363 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f04ce8af020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.363 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.363 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.363 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.363 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.364 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.364 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.364 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.364 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.364 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.364 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.364 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.364 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.364 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8adb20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.365 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.365 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.365 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.365 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.365 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.365 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.365 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.365 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.365 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.365 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.366 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.366 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.375 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'e833646f-b29a-4fe4-b786-4ee23c6f8a82', 'name': 'te-9772802-asg-iiflfa6ewgov-5e5s3ztwtke7-uezresy5bjmm', 'flavor': {'id': '8d013773-e8ea-4b83-a8e3-f58d9749637f', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'a3153c85-d830-4fd6-8cd6-1a69e6723a9e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000010', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '237a863555d84bd386855d9cf781beb4', 'user_id': '5ca35c18e54b493f9efdfe2218cce3c7', 'hostId': 'd53ff20533f73aa1094f7d1b315e252b91e3e85487374d883e31cb42', 'status': 'active', 'metadata': {'metering.server_group': '21873820-28a9-4731-9256-efbf2eb46b4d'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.379 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'f9b0315f-2a3c-471e-b629-b19d90a40a97', 'name': 'te-9772802-asg-iiflfa6ewgov-qnk6sbr7rvkm-ziuhg6hort2c', 'flavor': {'id': '8d013773-e8ea-4b83-a8e3-f58d9749637f', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'a3153c85-d830-4fd6-8cd6-1a69e6723a9e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000d', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '237a863555d84bd386855d9cf781beb4', 'user_id': '5ca35c18e54b493f9efdfe2218cce3c7', 'hostId': 'd53ff20533f73aa1094f7d1b315e252b91e3e85487374d883e31cb42', 'status': 'active', 'metadata': {'metering.server_group': '21873820-28a9-4731-9256-efbf2eb46b4d'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.379 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.380 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.380 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.380 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.380 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-26T17:32:31.380202) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:32:31 compute-0 openstack_network_exporter[204387]: ERROR   17:32:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 17:32:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:32:31 compute-0 openstack_network_exporter[204387]: ERROR   17:32:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 17:32:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.441 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.write.bytes volume: 72863744 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.442 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.484 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.write.bytes volume: 73162752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.484 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.485 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.485 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f04ce8af080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.485 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.485 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.485 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.485 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.485 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.write.latency volume: 6607735993 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.486 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.486 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.write.latency volume: 17039406984 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.486 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.486 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.487 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f04ce8af0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.487 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.487 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.487 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.487 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.487 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.write.requests volume: 311 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.487 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.488 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.write.requests volume: 342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.488 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.488 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.489 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f04ce8af470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.489 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.489 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.489 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.489 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.489 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-26T17:32:31.485634) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.490 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-26T17:32:31.487510) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.490 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-26T17:32:31.489456) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.494 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.498 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.498 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.498 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f04ce8ac260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.499 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.499 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f04cfa81c70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.499 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.499 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.499 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.499 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.499 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-26T17:32:31.499582) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.521 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/cpu volume: 282640000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.539 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/cpu volume: 336520000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.539 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.539 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f04ce8ac170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.539 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.539 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.540 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.540 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.540 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.540 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/network.incoming.packets volume: 12 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.540 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.541 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f04ce8af140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.541 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.541 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.541 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.541 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.541 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.541 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f04ce8adb50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.541 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.542 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.542 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.542 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.542 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.542 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-26T17:32:31.540093) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.542 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.543 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-26T17:32:31.541308) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.543 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.543 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-26T17:32:31.542360) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.543 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f04ce8ad9a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.544 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.544 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.544 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.544 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.544 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.544 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.544 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.545 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f04ce8af1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.545 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.545 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.545 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.545 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.545 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.545 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f04ce8ad8e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.546 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.546 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.546 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.546 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.546 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.546 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.546 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.547 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f04ce8ada60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.547 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.547 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.547 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.547 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.547 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.547 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.547 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.548 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f04ce8adaf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.548 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.548 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f04ce8ad880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.548 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.548 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.548 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.548 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.548 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.548 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.549 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.549 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f04ce8af3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.549 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.549 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.549 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.549 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.549 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/memory.usage volume: 43.328125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.549 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/memory.usage volume: 42.2578125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.550 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.550 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f04ce8af6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.550 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.550 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.550 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.550 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.550 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.550 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.551 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.551 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f04ce8af410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.551 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.551 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.551 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.551 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.551 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/network.incoming.bytes volume: 1976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.551 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/network.incoming.bytes volume: 1430 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.552 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.552 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f04ce8ac470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.552 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.552 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.552 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.552 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.552 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.553 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.553 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.553 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f04d17142f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.553 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.553 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.553 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.554 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-26T17:32:31.544255) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.554 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.554 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-26T17:32:31.545400) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.554 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-26T17:32:31.546242) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.554 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-26T17:32:31.547297) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.555 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-26T17:32:31.548524) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.555 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-26T17:32:31.549523) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.555 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-26T17:32:31.550578) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.555 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-26T17:32:31.551669) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.556 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-26T17:32:31.552708) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.556 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-26T17:32:31.554036) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.568 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.allocation volume: 30154752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.568 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.582 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.allocation volume: 30089216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.582 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.582 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.582 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f04ce8afe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.583 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.583 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.583 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.583 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.583 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.583 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.584 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.584 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.584 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.585 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f04ce8aede0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.585 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.585 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.585 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.585 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.585 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.read.bytes volume: 29338624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.586 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-26T17:32:31.583501) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.586 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.586 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.read.bytes volume: 30358016 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.586 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.587 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.587 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f04ce8aef00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.587 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.587 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.587 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.587 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.587 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.read.latency volume: 407670116 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.588 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.read.latency volume: 56361248 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.588 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.read.latency volume: 464695240 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.588 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.read.latency volume: 61571959 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.589 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.589 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f04ce8ac710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.589 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.589 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.589 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.589 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.589 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.590 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.590 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.590 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f04ce8aef60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.590 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.591 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.591 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.591 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.591 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.read.requests volume: 1056 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.591 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-26T17:32:31.585725) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.591 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.591 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.read.requests volume: 1091 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.592 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.592 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.592 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f04ce8aefc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.592 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.592 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.592 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.593 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.593 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.usage volume: 29884416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.593 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.593 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.593 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-26T17:32:31.587752) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.594 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.594 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.595 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.595 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.595 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.595 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.596 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.596 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.596 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.597 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.597 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.597 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.598 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.598 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.598 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.599 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.599 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.599 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.600 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.600 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.600 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.601 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.601 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.601 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.601 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.602 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.602 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.602 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.604 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-26T17:32:31.589596) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.604 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-26T17:32:31.591217) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:32:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:32:31.605 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-26T17:32:31.593208) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:32:33 compute-0 podman[261798]: 2026-01-26 17:32:33.192035761 +0000 UTC m=+0.075915406 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, config_id=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 26 17:32:33 compute-0 podman[261799]: 2026-01-26 17:32:33.221492232 +0000 UTC m=+0.098451859 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, com.redhat.component=ubi9-container, config_id=kepler, distribution-scope=public, io.openshift.tags=base rhel9, architecture=x86_64, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., 
vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, release-0.7.12=, version=9.4, io.buildah.version=1.29.0, managed_by=edpm_ansible)
Jan 26 17:32:33 compute-0 podman[261797]: 2026-01-26 17:32:33.252120646 +0000 UTC m=+0.140441133 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, 
org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:32:33 compute-0 nova_compute[185389]: 2026-01-26 17:32:33.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:32:33 compute-0 nova_compute[185389]: 2026-01-26 17:32:33.820 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:32:33 compute-0 nova_compute[185389]: 2026-01-26 17:32:33.821 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:32:33 compute-0 nova_compute[185389]: 2026-01-26 17:32:33.821 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:32:33 compute-0 nova_compute[185389]: 2026-01-26 17:32:33.821 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 17:32:33 compute-0 nova_compute[185389]: 2026-01-26 17:32:33.949 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:32:34 compute-0 nova_compute[185389]: 2026-01-26 17:32:34.019 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:32:34 compute-0 nova_compute[185389]: 2026-01-26 17:32:34.030 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:32:34 compute-0 nova_compute[185389]: 2026-01-26 17:32:34.101 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:32:34 compute-0 nova_compute[185389]: 2026-01-26 17:32:34.109 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f9b0315f-2a3c-471e-b629-b19d90a40a97/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:32:34 compute-0 nova_compute[185389]: 2026-01-26 17:32:34.177 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f9b0315f-2a3c-471e-b629-b19d90a40a97/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:32:34 compute-0 nova_compute[185389]: 2026-01-26 17:32:34.178 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f9b0315f-2a3c-471e-b629-b19d90a40a97/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:32:34 compute-0 nova_compute[185389]: 2026-01-26 17:32:34.243 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f9b0315f-2a3c-471e-b629-b19d90a40a97/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:32:34 compute-0 nova_compute[185389]: 2026-01-26 17:32:34.646 185393 WARNING nova.virt.libvirt.driver [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:32:34 compute-0 nova_compute[185389]: 2026-01-26 17:32:34.648 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4840MB free_disk=72.28514099121094GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 17:32:34 compute-0 nova_compute[185389]: 2026-01-26 17:32:34.648 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:32:34 compute-0 nova_compute[185389]: 2026-01-26 17:32:34.649 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:32:34 compute-0 nova_compute[185389]: 2026-01-26 17:32:34.927 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance f9b0315f-2a3c-471e-b629-b19d90a40a97 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 17:32:34 compute-0 nova_compute[185389]: 2026-01-26 17:32:34.927 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance e833646f-b29a-4fe4-b786-4ee23c6f8a82 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 17:32:34 compute-0 nova_compute[185389]: 2026-01-26 17:32:34.928 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 17:32:34 compute-0 nova_compute[185389]: 2026-01-26 17:32:34.928 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 17:32:34 compute-0 nova_compute[185389]: 2026-01-26 17:32:34.994 185393 DEBUG nova.compute.provider_tree [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:32:35 compute-0 nova_compute[185389]: 2026-01-26 17:32:35.000 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:32:35 compute-0 nova_compute[185389]: 2026-01-26 17:32:35.023 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:32:35 compute-0 nova_compute[185389]: 2026-01-26 17:32:35.025 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 17:32:35 compute-0 nova_compute[185389]: 2026-01-26 17:32:35.025 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.377s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:32:36 compute-0 nova_compute[185389]: 2026-01-26 17:32:36.216 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:32:40 compute-0 nova_compute[185389]: 2026-01-26 17:32:40.004 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:32:41 compute-0 nova_compute[185389]: 2026-01-26 17:32:41.218 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:32:42 compute-0 nova_compute[185389]: 2026-01-26 17:32:42.021 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:32:42 compute-0 nova_compute[185389]: 2026-01-26 17:32:42.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:32:45 compute-0 nova_compute[185389]: 2026-01-26 17:32:45.009 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:32:46 compute-0 nova_compute[185389]: 2026-01-26 17:32:46.220 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:32:46 compute-0 nova_compute[185389]: 2026-01-26 17:32:46.715 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:32:50 compute-0 nova_compute[185389]: 2026-01-26 17:32:50.014 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:32:51 compute-0 nova_compute[185389]: 2026-01-26 17:32:51.223 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:32:55 compute-0 nova_compute[185389]: 2026-01-26 17:32:55.019 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:32:55 compute-0 podman[261871]: 2026-01-26 17:32:55.215445885 +0000 UTC m=+0.102173852 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 26 17:32:56 compute-0 nova_compute[185389]: 2026-01-26 17:32:56.226 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:32:58 compute-0 podman[261897]: 2026-01-26 17:32:58.196088492 +0000 UTC m=+0.067443847 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 17:32:58 compute-0 podman[261895]: 2026-01-26 17:32:58.204779049 +0000 UTC m=+0.091402329 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=openstack_network_exporter, container_name=openstack_network_exporter, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, distribution-scope=public, vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, 
description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., architecture=x86_64, version=9.6, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, maintainer=Red Hat, Inc.)
Jan 26 17:32:58 compute-0 podman[261896]: 2026-01-26 17:32:58.208251183 +0000 UTC m=+0.088258663 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260120, org.label-schema.vendor=CentOS, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 26 17:32:58 compute-0 podman[261904]: 2026-01-26 17:32:58.221259397 +0000 UTC m=+0.082541307 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 17:32:59 compute-0 podman[201244]: time="2026-01-26T17:32:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 17:32:59 compute-0 podman[201244]: @ - - [26/Jan/2026:17:32:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 17:32:59 compute-0 podman[201244]: @ - - [26/Jan/2026:17:32:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4391 "" "Go-http-client/1.1"
Jan 26 17:33:00 compute-0 nova_compute[185389]: 2026-01-26 17:33:00.024 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:33:01 compute-0 nova_compute[185389]: 2026-01-26 17:33:01.228 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:33:01 compute-0 openstack_network_exporter[204387]: ERROR   17:33:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 17:33:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:33:01 compute-0 openstack_network_exporter[204387]: ERROR   17:33:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 17:33:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:33:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:33:01.797 106955 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:33:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:33:01.798 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:33:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:33:01.799 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:33:04 compute-0 podman[261981]: 2026-01-26 17:33:04.211782989 +0000 UTC m=+0.079501524 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., architecture=x86_64, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., release=1214.1726694543, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, io.openshift.expose-services=, managed_by=edpm_ansible, release-0.7.12=, vcs-type=git, version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, container_name=kepler, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=kepler)
Jan 26 17:33:04 compute-0 podman[261977]: 2026-01-26 17:33:04.235325099 +0000 UTC m=+0.111906536 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 26 17:33:04 compute-0 podman[261976]: 2026-01-26 17:33:04.254661885 +0000 UTC m=+0.138407197 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251202)
Jan 26 17:33:05 compute-0 nova_compute[185389]: 2026-01-26 17:33:05.028 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:33:06 compute-0 nova_compute[185389]: 2026-01-26 17:33:06.232 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:33:10 compute-0 nova_compute[185389]: 2026-01-26 17:33:10.033 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:33:11 compute-0 nova_compute[185389]: 2026-01-26 17:33:11.236 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:33:15 compute-0 nova_compute[185389]: 2026-01-26 17:33:15.038 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:33:16 compute-0 nova_compute[185389]: 2026-01-26 17:33:16.240 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:33:20 compute-0 nova_compute[185389]: 2026-01-26 17:33:20.042 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:33:21 compute-0 nova_compute[185389]: 2026-01-26 17:33:21.244 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:33:21 compute-0 nova_compute[185389]: 2026-01-26 17:33:21.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:33:21 compute-0 nova_compute[185389]: 2026-01-26 17:33:21.721 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 17:33:23 compute-0 nova_compute[185389]: 2026-01-26 17:33:23.722 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:33:23 compute-0 nova_compute[185389]: 2026-01-26 17:33:23.723 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:33:24 compute-0 nova_compute[185389]: 2026-01-26 17:33:24.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:33:25 compute-0 nova_compute[185389]: 2026-01-26 17:33:25.047 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:33:25 compute-0 nova_compute[185389]: 2026-01-26 17:33:25.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:33:25 compute-0 nova_compute[185389]: 2026-01-26 17:33:25.721 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 17:33:26 compute-0 podman[262044]: 2026-01-26 17:33:26.213586754 +0000 UTC m=+0.079483514 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 26 17:33:26 compute-0 nova_compute[185389]: 2026-01-26 17:33:26.248 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:33:26 compute-0 nova_compute[185389]: 2026-01-26 17:33:26.721 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "refresh_cache-e833646f-b29a-4fe4-b786-4ee23c6f8a82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 17:33:26 compute-0 nova_compute[185389]: 2026-01-26 17:33:26.721 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquired lock "refresh_cache-e833646f-b29a-4fe4-b786-4ee23c6f8a82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 17:33:26 compute-0 nova_compute[185389]: 2026-01-26 17:33:26.721 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 26 17:33:29 compute-0 podman[262070]: 2026-01-26 17:33:29.210541509 +0000 UTC m=+0.078296342 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 26 17:33:29 compute-0 podman[262068]: 2026-01-26 17:33:29.212938384 +0000 UTC m=+0.092319523 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260120, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Jan 26 17:33:29 compute-0 podman[262067]: 2026-01-26 17:33:29.23300776 +0000 UTC m=+0.118971909 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=openstack_network_exporter, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, vcs-type=git, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, release=1755695350, io.buildah.version=1.33.7, version=9.6)
Jan 26 17:33:29 compute-0 podman[262069]: 2026-01-26 17:33:29.233190985 +0000 UTC m=+0.107731033 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 26 17:33:29 compute-0 nova_compute[185389]: 2026-01-26 17:33:29.505 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] Updating instance_info_cache with network_info: [{"id": "d4acf2b5-6510-4f4d-b2a0-e986d8b8d2f7", "address": "fa:16:3e:80:a8:b1", "network": {"id": "ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.222", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "237a863555d84bd386855d9cf781beb4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4acf2b5-65", "ovs_interfaceid": "d4acf2b5-6510-4f4d-b2a0-e986d8b8d2f7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 17:33:29 compute-0 nova_compute[185389]: 2026-01-26 17:33:29.537 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Releasing lock "refresh_cache-e833646f-b29a-4fe4-b786-4ee23c6f8a82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 17:33:29 compute-0 nova_compute[185389]: 2026-01-26 17:33:29.537 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 26 17:33:29 compute-0 nova_compute[185389]: 2026-01-26 17:33:29.538 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:33:29 compute-0 podman[201244]: time="2026-01-26T17:33:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 17:33:29 compute-0 podman[201244]: @ - - [26/Jan/2026:17:33:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 17:33:29 compute-0 podman[201244]: @ - - [26/Jan/2026:17:33:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4388 "" "Go-http-client/1.1"
Jan 26 17:33:30 compute-0 nova_compute[185389]: 2026-01-26 17:33:30.049 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:33:31 compute-0 nova_compute[185389]: 2026-01-26 17:33:31.250 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:33:31 compute-0 openstack_network_exporter[204387]: ERROR   17:33:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 17:33:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:33:31 compute-0 openstack_network_exporter[204387]: ERROR   17:33:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 17:33:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:33:31 compute-0 nova_compute[185389]: 2026-01-26 17:33:31.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:33:31 compute-0 nova_compute[185389]: 2026-01-26 17:33:31.720 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 26 17:33:31 compute-0 nova_compute[185389]: 2026-01-26 17:33:31.773 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 26 17:33:35 compute-0 nova_compute[185389]: 2026-01-26 17:33:35.051 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:33:35 compute-0 podman[262148]: 2026-01-26 17:33:35.196582146 +0000 UTC m=+0.077808468 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=kepler, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, vendor=Red Hat, Inc., name=ubi9, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., vcs-type=git, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 26 17:33:35 compute-0 podman[262147]: 2026-01-26 17:33:35.215402328 +0000 UTC m=+0.099628261 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi)
Jan 26 17:33:35 compute-0 podman[262146]: 2026-01-26 17:33:35.260112265 +0000 UTC m=+0.148738869 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, 
io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 26 17:33:35 compute-0 nova_compute[185389]: 2026-01-26 17:33:35.775 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:33:35 compute-0 nova_compute[185389]: 2026-01-26 17:33:35.801 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:33:35 compute-0 nova_compute[185389]: 2026-01-26 17:33:35.801 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:33:35 compute-0 nova_compute[185389]: 2026-01-26 17:33:35.802 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:33:35 compute-0 nova_compute[185389]: 2026-01-26 17:33:35.802 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 17:33:35 compute-0 nova_compute[185389]: 2026-01-26 17:33:35.893 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:33:35 compute-0 nova_compute[185389]: 2026-01-26 17:33:35.964 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:33:35 compute-0 nova_compute[185389]: 2026-01-26 17:33:35.965 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:33:36 compute-0 nova_compute[185389]: 2026-01-26 17:33:36.028 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:33:36 compute-0 nova_compute[185389]: 2026-01-26 17:33:36.036 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f9b0315f-2a3c-471e-b629-b19d90a40a97/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:33:36 compute-0 nova_compute[185389]: 2026-01-26 17:33:36.099 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f9b0315f-2a3c-471e-b629-b19d90a40a97/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:33:36 compute-0 nova_compute[185389]: 2026-01-26 17:33:36.100 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f9b0315f-2a3c-471e-b629-b19d90a40a97/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:33:36 compute-0 nova_compute[185389]: 2026-01-26 17:33:36.168 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f9b0315f-2a3c-471e-b629-b19d90a40a97/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:33:36 compute-0 nova_compute[185389]: 2026-01-26 17:33:36.253 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:33:36 compute-0 nova_compute[185389]: 2026-01-26 17:33:36.577 185393 WARNING nova.virt.libvirt.driver [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:33:36 compute-0 nova_compute[185389]: 2026-01-26 17:33:36.579 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4851MB free_disk=72.28510284423828GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 17:33:36 compute-0 nova_compute[185389]: 2026-01-26 17:33:36.580 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:33:36 compute-0 nova_compute[185389]: 2026-01-26 17:33:36.580 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:33:36 compute-0 nova_compute[185389]: 2026-01-26 17:33:36.675 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance f9b0315f-2a3c-471e-b629-b19d90a40a97 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 17:33:36 compute-0 nova_compute[185389]: 2026-01-26 17:33:36.675 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance e833646f-b29a-4fe4-b786-4ee23c6f8a82 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 17:33:36 compute-0 nova_compute[185389]: 2026-01-26 17:33:36.676 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 17:33:36 compute-0 nova_compute[185389]: 2026-01-26 17:33:36.676 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 17:33:36 compute-0 nova_compute[185389]: 2026-01-26 17:33:36.701 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Refreshing inventories for resource provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 26 17:33:36 compute-0 nova_compute[185389]: 2026-01-26 17:33:36.730 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Updating ProviderTree inventory for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 26 17:33:36 compute-0 nova_compute[185389]: 2026-01-26 17:33:36.730 185393 DEBUG nova.compute.provider_tree [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Updating inventory in ProviderTree for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 26 17:33:36 compute-0 nova_compute[185389]: 2026-01-26 17:33:36.750 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Refreshing aggregate associations for resource provider b0bb5d31-f35b-4a04-b67d-66acc24fb822, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 26 17:33:36 compute-0 nova_compute[185389]: 2026-01-26 17:33:36.776 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Refreshing trait associations for resource provider b0bb5d31-f35b-4a04-b67d-66acc24fb822, traits: COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SHA,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_ABM,HW_CPU_X86_AMD_SVM,HW_CPU_X86_F16C,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE42,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_BMI,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_MMX,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 26 17:33:36 compute-0 nova_compute[185389]: 2026-01-26 17:33:36.842 185393 DEBUG nova.compute.provider_tree [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:33:37 compute-0 nova_compute[185389]: 2026-01-26 17:33:37.074 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:33:37 compute-0 nova_compute[185389]: 2026-01-26 17:33:37.075 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 17:33:37 compute-0 nova_compute[185389]: 2026-01-26 17:33:37.076 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.495s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:33:37 compute-0 nova_compute[185389]: 2026-01-26 17:33:37.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:33:37 compute-0 nova_compute[185389]: 2026-01-26 17:33:37.720 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 26 17:33:40 compute-0 nova_compute[185389]: 2026-01-26 17:33:40.057 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:33:41 compute-0 nova_compute[185389]: 2026-01-26 17:33:41.257 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:33:41 compute-0 nova_compute[185389]: 2026-01-26 17:33:41.729 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:33:42 compute-0 nova_compute[185389]: 2026-01-26 17:33:42.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:33:45 compute-0 nova_compute[185389]: 2026-01-26 17:33:45.060 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:33:46 compute-0 nova_compute[185389]: 2026-01-26 17:33:46.260 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:33:50 compute-0 nova_compute[185389]: 2026-01-26 17:33:50.064 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:33:51 compute-0 nova_compute[185389]: 2026-01-26 17:33:51.263 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:33:55 compute-0 nova_compute[185389]: 2026-01-26 17:33:55.068 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:33:56 compute-0 nova_compute[185389]: 2026-01-26 17:33:56.266 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:33:57 compute-0 podman[262220]: 2026-01-26 17:33:57.187787758 +0000 UTC m=+0.074599271 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 26 17:33:59 compute-0 podman[201244]: time="2026-01-26T17:33:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 17:33:59 compute-0 podman[201244]: @ - - [26/Jan/2026:17:33:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 17:33:59 compute-0 podman[201244]: @ - - [26/Jan/2026:17:33:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4392 "" "Go-http-client/1.1"
Jan 26 17:34:00 compute-0 nova_compute[185389]: 2026-01-26 17:34:00.074 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:34:00 compute-0 podman[262243]: 2026-01-26 17:34:00.177421851 +0000 UTC m=+0.065408562 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter)
Jan 26 17:34:00 compute-0 podman[262245]: 2026-01-26 17:34:00.190414434 +0000 UTC m=+0.067119328 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 26 17:34:00 compute-0 podman[262244]: 2026-01-26 17:34:00.199913263 +0000 UTC m=+0.082332242 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260120, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image)
Jan 26 17:34:00 compute-0 podman[262249]: 2026-01-26 17:34:00.205463404 +0000 UTC m=+0.078887428 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 26 17:34:01 compute-0 nova_compute[185389]: 2026-01-26 17:34:01.270 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:34:01 compute-0 openstack_network_exporter[204387]: ERROR   17:34:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 17:34:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:34:01 compute-0 openstack_network_exporter[204387]: ERROR   17:34:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 17:34:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:34:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:34:01.798 106955 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:34:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:34:01.799 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:34:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:34:01.800 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:34:05 compute-0 nova_compute[185389]: 2026-01-26 17:34:05.078 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:34:06 compute-0 podman[262321]: 2026-01-26 17:34:06.22621776 +0000 UTC m=+0.096928609 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 26 17:34:06 compute-0 podman[262320]: 2026-01-26 17:34:06.248324541 +0000 UTC m=+0.131182280 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 17:34:06 compute-0 podman[262326]: 2026-01-26 17:34:06.266854336 +0000 UTC m=+0.125631150 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, name=ubi9, release=1214.1726694543, architecture=x86_64, com.redhat.component=ubi9-container, distribution-scope=public, release-0.7.12=, managed_by=edpm_ansible, version=9.4, io.openshift.expose-services=, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 26 17:34:06 compute-0 nova_compute[185389]: 2026-01-26 17:34:06.271 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:34:06 compute-0 nova_compute[185389]: 2026-01-26 17:34:06.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:34:10 compute-0 nova_compute[185389]: 2026-01-26 17:34:10.084 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:34:11 compute-0 nova_compute[185389]: 2026-01-26 17:34:11.274 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:34:15 compute-0 nova_compute[185389]: 2026-01-26 17:34:15.088 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:34:16 compute-0 nova_compute[185389]: 2026-01-26 17:34:16.314 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:34:20 compute-0 nova_compute[185389]: 2026-01-26 17:34:20.091 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:34:21 compute-0 nova_compute[185389]: 2026-01-26 17:34:21.281 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:34:23 compute-0 nova_compute[185389]: 2026-01-26 17:34:23.732 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:34:23 compute-0 nova_compute[185389]: 2026-01-26 17:34:23.732 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:34:23 compute-0 nova_compute[185389]: 2026-01-26 17:34:23.733 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 17:34:24 compute-0 nova_compute[185389]: 2026-01-26 17:34:24.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:34:25 compute-0 nova_compute[185389]: 2026-01-26 17:34:25.095 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:34:25 compute-0 nova_compute[185389]: 2026-01-26 17:34:25.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:34:26 compute-0 nova_compute[185389]: 2026-01-26 17:34:26.286 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:34:26 compute-0 nova_compute[185389]: 2026-01-26 17:34:26.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:34:26 compute-0 nova_compute[185389]: 2026-01-26 17:34:26.721 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 17:34:26 compute-0 nova_compute[185389]: 2026-01-26 17:34:26.722 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 17:34:27 compute-0 nova_compute[185389]: 2026-01-26 17:34:27.008 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "refresh_cache-f9b0315f-2a3c-471e-b629-b19d90a40a97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 17:34:27 compute-0 nova_compute[185389]: 2026-01-26 17:34:27.009 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquired lock "refresh_cache-f9b0315f-2a3c-471e-b629-b19d90a40a97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 17:34:27 compute-0 nova_compute[185389]: 2026-01-26 17:34:27.009 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 26 17:34:27 compute-0 nova_compute[185389]: 2026-01-26 17:34:27.010 185393 DEBUG nova.objects.instance [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lazy-loading 'info_cache' on Instance uuid f9b0315f-2a3c-471e-b629-b19d90a40a97 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 17:34:28 compute-0 podman[262383]: 2026-01-26 17:34:28.22729543 +0000 UTC m=+0.097573476 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 26 17:34:29 compute-0 nova_compute[185389]: 2026-01-26 17:34:29.014 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] Updating instance_info_cache with network_info: [{"id": "4ea974be-d995-4c0f-bbcd-7a1410b167d8", "address": "fa:16:3e:ea:9e:d9", "network": {"id": "ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.123", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "237a863555d84bd386855d9cf781beb4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ea974be-d9", "ovs_interfaceid": "4ea974be-d995-4c0f-bbcd-7a1410b167d8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 17:34:29 compute-0 nova_compute[185389]: 2026-01-26 17:34:29.028 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Releasing lock "refresh_cache-f9b0315f-2a3c-471e-b629-b19d90a40a97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 17:34:29 compute-0 nova_compute[185389]: 2026-01-26 17:34:29.029 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 26 17:34:29 compute-0 nova_compute[185389]: 2026-01-26 17:34:29.029 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:34:29 compute-0 podman[201244]: time="2026-01-26T17:34:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 17:34:29 compute-0 podman[201244]: @ - - [26/Jan/2026:17:34:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 17:34:29 compute-0 podman[201244]: @ - - [26/Jan/2026:17:34:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4388 "" "Go-http-client/1.1"
Jan 26 17:34:30 compute-0 nova_compute[185389]: 2026-01-26 17:34:30.101 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:34:31 compute-0 podman[262408]: 2026-01-26 17:34:31.241574424 +0000 UTC m=+0.107321602 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260120, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 26 17:34:31 compute-0 podman[262407]: 2026-01-26 17:34:31.246496507 +0000 UTC m=+0.116356697 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, release=1755695350, container_name=openstack_network_exporter, distribution-scope=public, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 26 17:34:31 compute-0 podman[262409]: 2026-01-26 17:34:31.256465939 +0000 UTC m=+0.096309602 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 26 17:34:31 compute-0 podman[262410]: 2026-01-26 17:34:31.266173742 +0000 UTC m=+0.111346780 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 26 17:34:31 compute-0 nova_compute[185389]: 2026-01-26 17:34:31.289 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.362 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.363 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.363 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.364 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f04ce8af020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.364 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.365 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.365 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.365 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.365 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.365 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.365 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.365 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.365 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.366 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.366 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.366 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.366 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8adb20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.366 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.366 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.367 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.367 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.367 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.367 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.367 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.367 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.367 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.367 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.367 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.368 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cfa236e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.371 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'e833646f-b29a-4fe4-b786-4ee23c6f8a82', 'name': 'te-9772802-asg-iiflfa6ewgov-5e5s3ztwtke7-uezresy5bjmm', 'flavor': {'id': '8d013773-e8ea-4b83-a8e3-f58d9749637f', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'a3153c85-d830-4fd6-8cd6-1a69e6723a9e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000010', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '237a863555d84bd386855d9cf781beb4', 'user_id': '5ca35c18e54b493f9efdfe2218cce3c7', 'hostId': 'd53ff20533f73aa1094f7d1b315e252b91e3e85487374d883e31cb42', 'status': 'active', 'metadata': {'metering.server_group': '21873820-28a9-4731-9256-efbf2eb46b4d'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.375 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'f9b0315f-2a3c-471e-b629-b19d90a40a97', 'name': 'te-9772802-asg-iiflfa6ewgov-qnk6sbr7rvkm-ziuhg6hort2c', 'flavor': {'id': '8d013773-e8ea-4b83-a8e3-f58d9749637f', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'a3153c85-d830-4fd6-8cd6-1a69e6723a9e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000d', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '237a863555d84bd386855d9cf781beb4', 'user_id': '5ca35c18e54b493f9efdfe2218cce3c7', 'hostId': 'd53ff20533f73aa1094f7d1b315e252b91e3e85487374d883e31cb42', 'status': 'active', 'metadata': {'metering.server_group': '21873820-28a9-4731-9256-efbf2eb46b4d'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.375 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.376 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.376 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.376 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.376 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-26T17:34:31.376292) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:34:31 compute-0 openstack_network_exporter[204387]: ERROR   17:34:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 17:34:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:34:31 compute-0 openstack_network_exporter[204387]: ERROR   17:34:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 17:34:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.423 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.write.bytes volume: 73170944 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.424 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.466 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.write.bytes volume: 73162752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.467 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.467 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.467 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f04ce8af080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.468 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.468 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.468 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.468 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.468 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.write.latency volume: 6652155448 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.468 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.469 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.write.latency volume: 17039406984 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.469 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.469 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-26T17:34:31.468365) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.470 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.470 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f04ce8af0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.470 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.470 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.470 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.470 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.470 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.write.requests volume: 336 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.470 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-26T17:34:31.470598) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.471 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.471 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.write.requests volume: 342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.471 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.471 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.471 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f04ce8af470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.472 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.472 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.472 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.472 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.472 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-26T17:34:31.472334) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.476 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.478 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/network.incoming.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.479 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.479 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f04ce8ac260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.479 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.479 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f04cfa81c70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.479 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.479 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.479 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.480 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.480 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-26T17:34:31.479999) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.502 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/cpu volume: 334510000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.527 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/cpu volume: 337880000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.528 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.528 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f04ce8ac170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.528 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.528 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.528 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.528 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.528 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.529 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/network.incoming.packets volume: 27 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.529 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.529 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f04ce8af140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.529 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.530 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.530 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.530 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-26T17:34:31.528777) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.530 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.531 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.531 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f04ce8adb50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.531 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.531 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-26T17:34:31.530455) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.531 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.531 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.531 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.531 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.532 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.532 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-26T17:34:31.531734) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.532 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.532 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f04ce8ad9a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.532 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.533 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.533 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.533 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.533 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.533 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.533 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.534 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f04ce8af1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.534 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.534 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-26T17:34:31.533169) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.534 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.534 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.534 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.535 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.535 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f04ce8ad8e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.535 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.535 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-26T17:34:31.534577) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.535 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.535 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.535 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.535 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.536 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-26T17:34:31.535713) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.536 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.536 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.536 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f04ce8ada60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.536 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.536 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.536 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.537 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.537 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/network.outgoing.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.537 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.537 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-26T17:34:31.537043) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.537 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.537 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f04ce8adaf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.538 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.538 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f04ce8ad880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.538 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.538 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.538 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.538 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.538 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.538 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.539 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.539 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f04ce8af3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.539 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.539 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.539 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.540 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.540 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-26T17:34:31.538527) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.540 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/memory.usage volume: 42.21484375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.540 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/memory.usage volume: 42.2578125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.540 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.540 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f04ce8af6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.541 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.541 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.541 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-26T17:34:31.539930) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.541 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.541 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.541 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.541 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-26T17:34:31.541384) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.541 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.542 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.542 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f04ce8af410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.542 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.542 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.542 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.542 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.542 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/network.incoming.bytes volume: 1976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.542 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/network.incoming.bytes volume: 2060 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.543 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.543 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f04ce8ac470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.543 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.543 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.543 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.543 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-26T17:34:31.542584) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.543 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.543 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.544 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.544 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.544 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-26T17:34:31.543779) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.544 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f04d17142f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.544 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.545 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.545 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.545 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.545 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-26T17:34:31.545146) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.557 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.allocation volume: 30154752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.558 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.571 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.allocation volume: 30089216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.572 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.572 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.572 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f04ce8afe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.572 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.572 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.572 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.572 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.572 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.573 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.573 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.573 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.573 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.573 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f04ce8aede0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.574 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.574 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-26T17:34:31.572783) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.574 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.574 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.574 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.574 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.read.bytes volume: 30579200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.574 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.574 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.read.bytes volume: 30358016 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.575 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.575 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.575 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f04ce8aef00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.575 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.575 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.575 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.575 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-26T17:34:31.574363) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.575 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.576 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.read.latency volume: 431683055 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.576 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-26T17:34:31.575928) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.576 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.read.latency volume: 63042330 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.576 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.read.latency volume: 464695240 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.576 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.read.latency volume: 61571959 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.577 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.577 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f04ce8ac710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.577 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.577 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.577 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.577 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.577 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.577 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.578 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.578 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f04ce8aef60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.578 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.578 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.578 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.578 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.578 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.read.requests volume: 1106 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.578 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.579 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.read.requests volume: 1091 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.579 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.579 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.579 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f04ce8aefc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.579 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.579 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.579 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.580 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.580 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.580 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.580 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.580 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.581 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.581 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.581 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.581 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.581 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.581 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.581 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.581 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.581 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.581 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.581 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.581 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.581 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.582 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.582 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.582 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.582 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-26T17:34:31.577401) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.582 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-26T17:34:31.578600) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.582 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-26T17:34:31.580044) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.582 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.582 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.582 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.582 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.583 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.583 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.583 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.583 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.583 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.583 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:34:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:34:31.583 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:34:33 compute-0 sshd-session[262492]: banner exchange: Connection from 20.14.93.87 port 53170: invalid format
Jan 26 17:34:35 compute-0 nova_compute[185389]: 2026-01-26 17:34:35.105 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:34:36 compute-0 nova_compute[185389]: 2026-01-26 17:34:36.293 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:34:37 compute-0 podman[262495]: 2026-01-26 17:34:37.223846561 +0000 UTC m=+0.097017891 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, distribution-scope=public, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, io.openshift.tags=base rhel9, release=1214.1726694543, config_id=kepler)
Jan 26 17:34:37 compute-0 podman[262494]: 2026-01-26 17:34:37.228052066 +0000 UTC m=+0.090824733 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 26 17:34:37 compute-0 podman[262493]: 2026-01-26 17:34:37.242464588 +0000 UTC m=+0.126159143 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, 
managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 26 17:34:37 compute-0 nova_compute[185389]: 2026-01-26 17:34:37.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:34:37 compute-0 nova_compute[185389]: 2026-01-26 17:34:37.763 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:34:37 compute-0 nova_compute[185389]: 2026-01-26 17:34:37.764 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:34:37 compute-0 nova_compute[185389]: 2026-01-26 17:34:37.764 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:34:37 compute-0 nova_compute[185389]: 2026-01-26 17:34:37.764 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 17:34:37 compute-0 nova_compute[185389]: 2026-01-26 17:34:37.846 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:34:37 compute-0 nova_compute[185389]: 2026-01-26 17:34:37.926 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:34:37 compute-0 nova_compute[185389]: 2026-01-26 17:34:37.928 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:34:38 compute-0 nova_compute[185389]: 2026-01-26 17:34:38.018 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:34:38 compute-0 nova_compute[185389]: 2026-01-26 17:34:38.026 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f9b0315f-2a3c-471e-b629-b19d90a40a97/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:34:38 compute-0 nova_compute[185389]: 2026-01-26 17:34:38.091 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f9b0315f-2a3c-471e-b629-b19d90a40a97/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:34:38 compute-0 nova_compute[185389]: 2026-01-26 17:34:38.094 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f9b0315f-2a3c-471e-b629-b19d90a40a97/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:34:38 compute-0 nova_compute[185389]: 2026-01-26 17:34:38.174 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f9b0315f-2a3c-471e-b629-b19d90a40a97/disk --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:34:38 compute-0 nova_compute[185389]: 2026-01-26 17:34:38.540 185393 WARNING nova.virt.libvirt.driver [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:34:38 compute-0 nova_compute[185389]: 2026-01-26 17:34:38.542 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4833MB free_disk=72.28510284423828GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 17:34:38 compute-0 nova_compute[185389]: 2026-01-26 17:34:38.542 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:34:38 compute-0 nova_compute[185389]: 2026-01-26 17:34:38.543 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:34:38 compute-0 nova_compute[185389]: 2026-01-26 17:34:38.629 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance f9b0315f-2a3c-471e-b629-b19d90a40a97 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 17:34:38 compute-0 nova_compute[185389]: 2026-01-26 17:34:38.630 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance e833646f-b29a-4fe4-b786-4ee23c6f8a82 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 17:34:38 compute-0 nova_compute[185389]: 2026-01-26 17:34:38.630 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 17:34:38 compute-0 nova_compute[185389]: 2026-01-26 17:34:38.631 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 17:34:38 compute-0 nova_compute[185389]: 2026-01-26 17:34:38.706 185393 DEBUG nova.compute.provider_tree [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:34:38 compute-0 nova_compute[185389]: 2026-01-26 17:34:38.719 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:34:38 compute-0 nova_compute[185389]: 2026-01-26 17:34:38.721 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 17:34:38 compute-0 nova_compute[185389]: 2026-01-26 17:34:38.721 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.178s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:34:40 compute-0 nova_compute[185389]: 2026-01-26 17:34:40.109 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:34:41 compute-0 nova_compute[185389]: 2026-01-26 17:34:41.299 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:34:42 compute-0 sshd-session[262490]: Connection closed by 20.14.93.87 port 53166 [preauth]
Jan 26 17:34:44 compute-0 nova_compute[185389]: 2026-01-26 17:34:44.716 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:34:44 compute-0 nova_compute[185389]: 2026-01-26 17:34:44.717 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:34:45 compute-0 nova_compute[185389]: 2026-01-26 17:34:45.113 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:34:46 compute-0 nova_compute[185389]: 2026-01-26 17:34:46.303 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:34:46 compute-0 nova_compute[185389]: 2026-01-26 17:34:46.716 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:34:50 compute-0 nova_compute[185389]: 2026-01-26 17:34:50.117 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:34:51 compute-0 nova_compute[185389]: 2026-01-26 17:34:51.306 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:34:55 compute-0 nova_compute[185389]: 2026-01-26 17:34:55.120 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:34:56 compute-0 nova_compute[185389]: 2026-01-26 17:34:56.308 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:34:59 compute-0 podman[262566]: 2026-01-26 17:34:59.227697014 +0000 UTC m=+0.113156561 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 26 17:34:59 compute-0 podman[201244]: time="2026-01-26T17:34:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 17:34:59 compute-0 podman[201244]: @ - - [26/Jan/2026:17:34:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 17:34:59 compute-0 podman[201244]: @ - - [26/Jan/2026:17:34:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4389 "" "Go-http-client/1.1"
Jan 26 17:35:00 compute-0 nova_compute[185389]: 2026-01-26 17:35:00.125 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:35:01 compute-0 nova_compute[185389]: 2026-01-26 17:35:01.311 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:35:01 compute-0 openstack_network_exporter[204387]: ERROR   17:35:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 17:35:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:35:01 compute-0 openstack_network_exporter[204387]: ERROR   17:35:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 17:35:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:35:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:35:01.800 106955 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:35:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:35:01.801 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:35:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:35:01.803 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:35:02 compute-0 podman[262590]: 2026-01-26 17:35:02.217877563 +0000 UTC m=+0.087166013 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.6, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41)
Jan 26 17:35:02 compute-0 podman[262592]: 2026-01-26 17:35:02.223419434 +0000 UTC m=+0.082550997 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 26 17:35:02 compute-0 podman[262591]: 2026-01-26 17:35:02.256151075 +0000 UTC m=+0.115377661 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20260120, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2)
Jan 26 17:35:02 compute-0 podman[262597]: 2026-01-26 17:35:02.256423882 +0000 UTC m=+0.102361037 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 17:35:05 compute-0 nova_compute[185389]: 2026-01-26 17:35:05.128 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:35:06 compute-0 nova_compute[185389]: 2026-01-26 17:35:06.314 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:35:08 compute-0 podman[262667]: 2026-01-26 17:35:08.223913669 +0000 UTC m=+0.093302209 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 17:35:08 compute-0 podman[262668]: 2026-01-26 17:35:08.253058473 +0000 UTC m=+0.102983484 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., distribution-scope=public, name=ubi9, config_id=kepler, architecture=x86_64, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, maintainer=Red Hat, Inc., description=The Universal Base 
Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, io.openshift.expose-services=, release=1214.1726694543)
Jan 26 17:35:08 compute-0 podman[262666]: 2026-01-26 17:35:08.289345319 +0000 UTC m=+0.161704110 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 26 17:35:10 compute-0 nova_compute[185389]: 2026-01-26 17:35:10.130 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:35:11 compute-0 nova_compute[185389]: 2026-01-26 17:35:11.316 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:35:15 compute-0 nova_compute[185389]: 2026-01-26 17:35:15.135 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:35:16 compute-0 nova_compute[185389]: 2026-01-26 17:35:16.318 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:35:20 compute-0 nova_compute[185389]: 2026-01-26 17:35:20.141 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:35:21 compute-0 nova_compute[185389]: 2026-01-26 17:35:21.320 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:35:23 compute-0 nova_compute[185389]: 2026-01-26 17:35:23.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:35:24 compute-0 nova_compute[185389]: 2026-01-26 17:35:24.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:35:24 compute-0 nova_compute[185389]: 2026-01-26 17:35:24.721 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:35:24 compute-0 nova_compute[185389]: 2026-01-26 17:35:24.721 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 17:35:25 compute-0 nova_compute[185389]: 2026-01-26 17:35:25.147 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:35:25 compute-0 nova_compute[185389]: 2026-01-26 17:35:25.721 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:35:26 compute-0 nova_compute[185389]: 2026-01-26 17:35:26.324 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:35:26 compute-0 nova_compute[185389]: 2026-01-26 17:35:26.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:35:26 compute-0 nova_compute[185389]: 2026-01-26 17:35:26.722 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 17:35:28 compute-0 sshd-session[262729]: Invalid user ubuntu from 103.42.57.146 port 52530
Jan 26 17:35:28 compute-0 sshd-session[262729]: Received disconnect from 103.42.57.146 port 52530:11:  [preauth]
Jan 26 17:35:28 compute-0 sshd-session[262729]: Disconnected from invalid user ubuntu 103.42.57.146 port 52530 [preauth]
Jan 26 17:35:29 compute-0 nova_compute[185389]: 2026-01-26 17:35:29.035 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "refresh_cache-e833646f-b29a-4fe4-b786-4ee23c6f8a82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 17:35:29 compute-0 nova_compute[185389]: 2026-01-26 17:35:29.036 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquired lock "refresh_cache-e833646f-b29a-4fe4-b786-4ee23c6f8a82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 17:35:29 compute-0 nova_compute[185389]: 2026-01-26 17:35:29.037 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 26 17:35:29 compute-0 podman[201244]: time="2026-01-26T17:35:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 17:35:29 compute-0 podman[201244]: @ - - [26/Jan/2026:17:35:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 17:35:29 compute-0 podman[201244]: @ - - [26/Jan/2026:17:35:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4393 "" "Go-http-client/1.1"
Jan 26 17:35:30 compute-0 nova_compute[185389]: 2026-01-26 17:35:30.151 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:35:30 compute-0 podman[262732]: 2026-01-26 17:35:30.236135417 +0000 UTC m=+0.108888004 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 26 17:35:31 compute-0 nova_compute[185389]: 2026-01-26 17:35:31.327 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:35:31 compute-0 openstack_network_exporter[204387]: ERROR   17:35:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 17:35:31 compute-0 openstack_network_exporter[204387]: ERROR   17:35:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 17:35:32 compute-0 nova_compute[185389]: 2026-01-26 17:35:32.406 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] Updating instance_info_cache with network_info: [{"id": "d4acf2b5-6510-4f4d-b2a0-e986d8b8d2f7", "address": "fa:16:3e:80:a8:b1", "network": {"id": "ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.222", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "237a863555d84bd386855d9cf781beb4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4acf2b5-65", "ovs_interfaceid": "d4acf2b5-6510-4f4d-b2a0-e986d8b8d2f7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 17:35:32 compute-0 nova_compute[185389]: 2026-01-26 17:35:32.436 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Releasing lock "refresh_cache-e833646f-b29a-4fe4-b786-4ee23c6f8a82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 17:35:32 compute-0 nova_compute[185389]: 2026-01-26 17:35:32.436 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 26 17:35:32 compute-0 nova_compute[185389]: 2026-01-26 17:35:32.437 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:35:33 compute-0 podman[262755]: 2026-01-26 17:35:33.227324684 +0000 UTC m=+0.106274872 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=openstack_network_exporter, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, maintainer=Red Hat, Inc., distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, release=1755695350, version=9.6, architecture=x86_64)
Jan 26 17:35:33 compute-0 podman[262756]: 2026-01-26 17:35:33.245827898 +0000 UTC m=+0.123993935 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260120)
Jan 26 17:35:33 compute-0 podman[262757]: 2026-01-26 17:35:33.253450055 +0000 UTC m=+0.120889350 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 17:35:33 compute-0 podman[262758]: 2026-01-26 17:35:33.25475713 +0000 UTC m=+0.111128374 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 26 17:35:35 compute-0 nova_compute[185389]: 2026-01-26 17:35:35.154 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:35:36 compute-0 nova_compute[185389]: 2026-01-26 17:35:36.330 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:35:37 compute-0 nova_compute[185389]: 2026-01-26 17:35:37.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:35:37 compute-0 nova_compute[185389]: 2026-01-26 17:35:37.770 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:35:37 compute-0 nova_compute[185389]: 2026-01-26 17:35:37.771 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:35:37 compute-0 nova_compute[185389]: 2026-01-26 17:35:37.771 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:35:37 compute-0 nova_compute[185389]: 2026-01-26 17:35:37.771 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 17:35:37 compute-0 nova_compute[185389]: 2026-01-26 17:35:37.881 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:35:37 compute-0 nova_compute[185389]: 2026-01-26 17:35:37.959 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:35:37 compute-0 nova_compute[185389]: 2026-01-26 17:35:37.960 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:35:38 compute-0 nova_compute[185389]: 2026-01-26 17:35:38.073 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk --force-share --output=json" returned: 0 in 0.112s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:35:38 compute-0 nova_compute[185389]: 2026-01-26 17:35:38.086 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f9b0315f-2a3c-471e-b629-b19d90a40a97/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:35:38 compute-0 nova_compute[185389]: 2026-01-26 17:35:38.157 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f9b0315f-2a3c-471e-b629-b19d90a40a97/disk --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:35:38 compute-0 nova_compute[185389]: 2026-01-26 17:35:38.159 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f9b0315f-2a3c-471e-b629-b19d90a40a97/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:35:38 compute-0 nova_compute[185389]: 2026-01-26 17:35:38.233 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f9b0315f-2a3c-471e-b629-b19d90a40a97/disk --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:35:38 compute-0 nova_compute[185389]: 2026-01-26 17:35:38.631 185393 WARNING nova.virt.libvirt.driver [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:35:38 compute-0 nova_compute[185389]: 2026-01-26 17:35:38.633 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4845MB free_disk=72.28514862060547GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 17:35:38 compute-0 nova_compute[185389]: 2026-01-26 17:35:38.633 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:35:38 compute-0 nova_compute[185389]: 2026-01-26 17:35:38.634 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:35:38 compute-0 nova_compute[185389]: 2026-01-26 17:35:38.840 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance f9b0315f-2a3c-471e-b629-b19d90a40a97 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 17:35:38 compute-0 nova_compute[185389]: 2026-01-26 17:35:38.841 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance e833646f-b29a-4fe4-b786-4ee23c6f8a82 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 17:35:38 compute-0 nova_compute[185389]: 2026-01-26 17:35:38.841 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 17:35:38 compute-0 nova_compute[185389]: 2026-01-26 17:35:38.842 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 17:35:39 compute-0 nova_compute[185389]: 2026-01-26 17:35:39.219 185393 DEBUG nova.compute.provider_tree [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:35:39 compute-0 podman[262847]: 2026-01-26 17:35:39.222398192 +0000 UTC m=+0.088685814 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=ubi9, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, release-0.7.12=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, architecture=x86_64, com.redhat.component=ubi9-container, version=9.4, config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc.)
Jan 26 17:35:39 compute-0 podman[262846]: 2026-01-26 17:35:39.233254577 +0000 UTC m=+0.104498395 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi)
Jan 26 17:35:39 compute-0 nova_compute[185389]: 2026-01-26 17:35:39.235 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:35:39 compute-0 nova_compute[185389]: 2026-01-26 17:35:39.236 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 17:35:39 compute-0 nova_compute[185389]: 2026-01-26 17:35:39.237 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.603s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:35:39 compute-0 podman[262845]: 2026-01-26 17:35:39.272218827 +0000 UTC m=+0.150377772 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack 
Kubernetes Operator team, managed_by=edpm_ansible)
Jan 26 17:35:40 compute-0 nova_compute[185389]: 2026-01-26 17:35:40.158 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:35:41 compute-0 nova_compute[185389]: 2026-01-26 17:35:41.333 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:35:45 compute-0 nova_compute[185389]: 2026-01-26 17:35:45.163 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:35:46 compute-0 nova_compute[185389]: 2026-01-26 17:35:46.336 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:35:47 compute-0 nova_compute[185389]: 2026-01-26 17:35:47.232 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:35:47 compute-0 nova_compute[185389]: 2026-01-26 17:35:47.233 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:35:50 compute-0 nova_compute[185389]: 2026-01-26 17:35:50.168 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:35:51 compute-0 nova_compute[185389]: 2026-01-26 17:35:51.339 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:35:55 compute-0 nova_compute[185389]: 2026-01-26 17:35:55.172 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:35:56 compute-0 nova_compute[185389]: 2026-01-26 17:35:56.341 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:35:59 compute-0 podman[201244]: time="2026-01-26T17:35:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 17:35:59 compute-0 podman[201244]: @ - - [26/Jan/2026:17:35:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 17:35:59 compute-0 podman[201244]: @ - - [26/Jan/2026:17:35:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4387 "" "Go-http-client/1.1"
Jan 26 17:36:00 compute-0 nova_compute[185389]: 2026-01-26 17:36:00.178 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:36:01 compute-0 podman[262905]: 2026-01-26 17:36:01.203266227 +0000 UTC m=+0.076111652 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 26 17:36:01 compute-0 nova_compute[185389]: 2026-01-26 17:36:01.344 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:36:01 compute-0 openstack_network_exporter[204387]: ERROR   17:36:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 17:36:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:36:01 compute-0 openstack_network_exporter[204387]: ERROR   17:36:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 17:36:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:36:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:36:01.801 106955 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:36:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:36:01.802 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:36:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:36:01.804 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:36:04 compute-0 podman[262930]: 2026-01-26 17:36:04.224910946 +0000 UTC m=+0.086227488 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 17:36:04 compute-0 podman[262929]: 2026-01-26 17:36:04.227905367 +0000 UTC m=+0.088199812 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 17:36:04 compute-0 podman[262927]: 2026-01-26 17:36:04.236679206 +0000 UTC m=+0.098867493 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', 
'/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, config_id=openstack_network_exporter, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, architecture=x86_64, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container)
Jan 26 17:36:04 compute-0 podman[262928]: 2026-01-26 17:36:04.263842414 +0000 UTC m=+0.122844593 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260120, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4)
Jan 26 17:36:05 compute-0 nova_compute[185389]: 2026-01-26 17:36:05.182 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:36:06 compute-0 nova_compute[185389]: 2026-01-26 17:36:06.347 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:36:10 compute-0 nova_compute[185389]: 2026-01-26 17:36:10.185 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:36:10 compute-0 podman[263007]: 2026-01-26 17:36:10.230619298 +0000 UTC m=+0.100219077 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 26 17:36:10 compute-0 podman[263008]: 2026-01-26 17:36:10.26815322 +0000 UTC m=+0.125726222 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, managed_by=edpm_ansible, release=1214.1726694543, release-0.7.12=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, name=ubi9, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., distribution-scope=public, com.redhat.component=ubi9-container, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=kepler)
Jan 26 17:36:10 compute-0 podman[263006]: 2026-01-26 17:36:10.294702302 +0000 UTC m=+0.158893244 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller)
Jan 26 17:36:11 compute-0 nova_compute[185389]: 2026-01-26 17:36:11.350 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:36:15 compute-0 nova_compute[185389]: 2026-01-26 17:36:15.190 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:36:18 compute-0 nova_compute[185389]: 2026-01-26 17:36:18.068 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:36:20 compute-0 nova_compute[185389]: 2026-01-26 17:36:20.194 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:36:23 compute-0 nova_compute[185389]: 2026-01-26 17:36:23.072 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:36:24 compute-0 nova_compute[185389]: 2026-01-26 17:36:24.722 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:36:25 compute-0 nova_compute[185389]: 2026-01-26 17:36:25.198 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:36:25 compute-0 nova_compute[185389]: 2026-01-26 17:36:25.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:36:25 compute-0 nova_compute[185389]: 2026-01-26 17:36:25.720 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 17:36:26 compute-0 nova_compute[185389]: 2026-01-26 17:36:26.721 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:36:26 compute-0 nova_compute[185389]: 2026-01-26 17:36:26.722 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 17:36:26 compute-0 nova_compute[185389]: 2026-01-26 17:36:26.722 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 17:36:27 compute-0 nova_compute[185389]: 2026-01-26 17:36:27.073 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "refresh_cache-f9b0315f-2a3c-471e-b629-b19d90a40a97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 17:36:27 compute-0 nova_compute[185389]: 2026-01-26 17:36:27.074 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquired lock "refresh_cache-f9b0315f-2a3c-471e-b629-b19d90a40a97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 17:36:27 compute-0 nova_compute[185389]: 2026-01-26 17:36:27.074 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 26 17:36:27 compute-0 nova_compute[185389]: 2026-01-26 17:36:27.075 185393 DEBUG nova.objects.instance [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lazy-loading 'info_cache' on Instance uuid f9b0315f-2a3c-471e-b629-b19d90a40a97 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 17:36:28 compute-0 nova_compute[185389]: 2026-01-26 17:36:28.074 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:36:28 compute-0 nova_compute[185389]: 2026-01-26 17:36:28.260 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] Updating instance_info_cache with network_info: [{"id": "4ea974be-d995-4c0f-bbcd-7a1410b167d8", "address": "fa:16:3e:ea:9e:d9", "network": {"id": "ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.123", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "237a863555d84bd386855d9cf781beb4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ea974be-d9", "ovs_interfaceid": "4ea974be-d995-4c0f-bbcd-7a1410b167d8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 17:36:28 compute-0 nova_compute[185389]: 2026-01-26 17:36:28.278 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Releasing lock "refresh_cache-f9b0315f-2a3c-471e-b629-b19d90a40a97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 17:36:28 compute-0 nova_compute[185389]: 2026-01-26 17:36:28.279 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 26 17:36:28 compute-0 nova_compute[185389]: 2026-01-26 17:36:28.280 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:36:28 compute-0 nova_compute[185389]: 2026-01-26 17:36:28.280 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:36:29 compute-0 podman[201244]: time="2026-01-26T17:36:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 17:36:29 compute-0 podman[201244]: @ - - [26/Jan/2026:17:36:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 17:36:29 compute-0 podman[201244]: @ - - [26/Jan/2026:17:36:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4387 "" "Go-http-client/1.1"
Jan 26 17:36:30 compute-0 nova_compute[185389]: 2026-01-26 17:36:30.204 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:36:30 compute-0 nova_compute[185389]: 2026-01-26 17:36:30.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.363 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.364 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.364 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cf9a2a20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.365 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f04ce8af020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.366 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cf9a2a20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.367 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cf9a2a20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.367 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cf9a2a20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.367 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cf9a2a20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.367 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cf9a2a20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.368 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cf9a2a20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.368 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cf9a2a20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.368 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cf9a2a20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.368 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cf9a2a20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.369 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cf9a2a20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.369 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cf9a2a20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.369 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cf9a2a20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.370 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8adb20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cf9a2a20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.370 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cf9a2a20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.370 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cf9a2a20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.370 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cf9a2a20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.371 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cf9a2a20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.371 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cf9a2a20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.372 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cf9a2a20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.372 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cf9a2a20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.373 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cf9a2a20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.373 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cf9a2a20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.373 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cf9a2a20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.373 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cf9a2a20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.374 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cf9a2a20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.378 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'e833646f-b29a-4fe4-b786-4ee23c6f8a82', 'name': 'te-9772802-asg-iiflfa6ewgov-5e5s3ztwtke7-uezresy5bjmm', 'flavor': {'id': '8d013773-e8ea-4b83-a8e3-f58d9749637f', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'a3153c85-d830-4fd6-8cd6-1a69e6723a9e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000010', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '237a863555d84bd386855d9cf781beb4', 'user_id': '5ca35c18e54b493f9efdfe2218cce3c7', 'hostId': 'd53ff20533f73aa1094f7d1b315e252b91e3e85487374d883e31cb42', 'status': 'active', 'metadata': {'metering.server_group': '21873820-28a9-4731-9256-efbf2eb46b4d'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.382 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'f9b0315f-2a3c-471e-b629-b19d90a40a97', 'name': 'te-9772802-asg-iiflfa6ewgov-qnk6sbr7rvkm-ziuhg6hort2c', 'flavor': {'id': '8d013773-e8ea-4b83-a8e3-f58d9749637f', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'a3153c85-d830-4fd6-8cd6-1a69e6723a9e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000d', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '237a863555d84bd386855d9cf781beb4', 'user_id': '5ca35c18e54b493f9efdfe2218cce3c7', 'hostId': 'd53ff20533f73aa1094f7d1b315e252b91e3e85487374d883e31cb42', 'status': 'active', 'metadata': {'metering.server_group': '21873820-28a9-4731-9256-efbf2eb46b4d'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.383 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.383 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.383 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.383 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.384 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-26T17:36:31.383368) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:36:31 compute-0 openstack_network_exporter[204387]: ERROR   17:36:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 17:36:31 compute-0 openstack_network_exporter[204387]: ERROR   17:36:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.430 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.write.bytes volume: 73170944 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.431 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.475 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.write.bytes volume: 73162752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.476 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.477 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.477 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f04ce8af080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.477 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.477 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.477 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.477 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.478 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.write.latency volume: 6652155448 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.478 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-26T17:36:31.477870) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.478 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.478 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.write.latency volume: 17039406984 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.479 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.479 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.479 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f04ce8af0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.479 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.479 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.479 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.479 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.480 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.write.requests volume: 336 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.480 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-26T17:36:31.479844) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.480 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.480 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.write.requests volume: 342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.481 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.481 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.481 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f04ce8af470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.481 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.482 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.482 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.482 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.482 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-26T17:36:31.482170) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.487 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.491 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.492 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.492 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f04ce8ac260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.493 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.493 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f04cfa81c70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.493 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.493 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.493 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.493 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.494 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-26T17:36:31.493441) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.514 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/cpu volume: 336000000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.535 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/cpu volume: 339400000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.535 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.535 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f04ce8ac170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.536 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.536 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.536 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.536 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.536 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.536 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/network.incoming.packets volume: 27 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.537 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.537 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-26T17:36:31.536314) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.537 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f04ce8af140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.537 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.537 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.537 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.537 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.538 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.538 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f04ce8adb50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.538 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-26T17:36:31.537767) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.538 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.538 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.538 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.538 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.539 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.539 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.539 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-26T17:36:31.538891) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.539 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.539 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f04ce8ad9a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.540 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.540 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.540 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.540 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.540 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.540 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.541 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.541 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f04ce8af1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.541 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.541 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.541 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.541 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.542 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.542 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f04ce8ad8e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.542 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.542 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.542 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.542 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.542 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.543 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-26T17:36:31.540249) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.543 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.543 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-26T17:36:31.541512) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.543 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-26T17:36:31.542601) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.543 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.543 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f04ce8ada60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.543 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.543 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.543 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.543 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.544 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.544 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-26T17:36:31.543862) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.544 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.544 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.544 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f04ce8adaf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.544 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.545 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f04ce8ad880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.545 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.545 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.545 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.545 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.545 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.545 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-26T17:36:31.545395) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.545 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.546 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.546 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f04ce8af3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.546 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.546 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.546 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.546 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.546 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/memory.usage volume: 42.21484375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.547 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/memory.usage volume: 42.2578125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.547 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.547 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f04ce8af6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.547 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.547 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.547 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.548 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.548 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.548 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-26T17:36:31.546790) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.548 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-26T17:36:31.548077) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.548 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.549 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.549 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f04ce8af410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.549 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.549 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.549 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.549 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.549 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/network.incoming.bytes volume: 1976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.549 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/network.incoming.bytes volume: 2060 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.550 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.550 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f04ce8ac470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.550 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.550 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.550 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.550 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.550 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-26T17:36:31.549398) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.550 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.551 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.551 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.551 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f04d17142f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.551 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.551 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.551 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.552 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-26T17:36:31.550771) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.552 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.552 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-26T17:36:31.552055) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.569 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.allocation volume: 30154752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.570 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.585 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.allocation volume: 30089216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.586 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.586 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.586 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f04ce8afe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.586 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.587 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.587 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.587 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.587 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.587 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-26T17:36:31.587264) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.588 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.588 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.588 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.589 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.589 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f04ce8aede0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.589 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.589 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.589 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.590 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.590 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.read.bytes volume: 30579200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.590 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.590 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.read.bytes volume: 30358016 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.591 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.591 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.592 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f04ce8aef00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.592 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.592 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.592 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.592 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.592 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.read.latency volume: 431683055 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.593 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-26T17:36:31.590043) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.593 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-26T17:36:31.592710) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.593 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.read.latency volume: 63042330 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.593 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.read.latency volume: 464695240 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.593 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.read.latency volume: 61571959 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.594 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.594 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f04ce8ac710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.594 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.594 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.594 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.594 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.595 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.595 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-26T17:36:31.594836) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.595 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.595 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.595 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f04ce8aef60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.596 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.596 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.596 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.596 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.596 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-26T17:36:31.596253) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.596 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.read.requests volume: 1106 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.596 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.597 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.read.requests volume: 1091 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.597 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.598 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.598 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f04ce8aefc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.598 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.598 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.598 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.598 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.599 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.599 14 DEBUG ceilometer.compute.pollsters [-] e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.599 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-26T17:36:31.598865) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.599 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.600 14 DEBUG ceilometer.compute.pollsters [-] f9b0315f-2a3c-471e-b629-b19d90a40a97/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.600 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.601 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.601 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.601 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.601 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.601 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.602 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.602 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.602 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.602 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.602 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.602 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.602 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.602 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.602 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.602 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.603 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.603 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.603 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.603 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.603 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.603 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.603 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.603 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.603 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.603 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:36:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:36:31.604 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:36:32 compute-0 podman[263068]: 2026-01-26 17:36:32.198996555 +0000 UTC m=+0.075633157 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 26 17:36:33 compute-0 nova_compute[185389]: 2026-01-26 17:36:33.076 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:36:35 compute-0 podman[263094]: 2026-01-26 17:36:35.196795643 +0000 UTC m=+0.070669061 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 26 17:36:35 compute-0 nova_compute[185389]: 2026-01-26 17:36:35.207 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:36:35 compute-0 podman[263092]: 2026-01-26 17:36:35.228322839 +0000 UTC m=+0.101619171 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, tcib_managed=true, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260120, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute)
Jan 26 17:36:35 compute-0 podman[263091]: 2026-01-26 17:36:35.235431643 +0000 UTC m=+0.112063706 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=openstack_network_exporter, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package 
manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1755695350, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, architecture=x86_64, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container)
Jan 26 17:36:35 compute-0 podman[263093]: 2026-01-26 17:36:35.230251351 +0000 UTC m=+0.106608767 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Jan 26 17:36:37 compute-0 nova_compute[185389]: 2026-01-26 17:36:37.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:36:37 compute-0 nova_compute[185389]: 2026-01-26 17:36:37.757 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:36:37 compute-0 nova_compute[185389]: 2026-01-26 17:36:37.757 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:36:37 compute-0 nova_compute[185389]: 2026-01-26 17:36:37.758 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:36:37 compute-0 nova_compute[185389]: 2026-01-26 17:36:37.758 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 17:36:37 compute-0 nova_compute[185389]: 2026-01-26 17:36:37.838 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:36:37 compute-0 nova_compute[185389]: 2026-01-26 17:36:37.951 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk --force-share --output=json" returned: 0 in 0.114s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:36:37 compute-0 nova_compute[185389]: 2026-01-26 17:36:37.953 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:36:38 compute-0 nova_compute[185389]: 2026-01-26 17:36:38.025 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:36:38 compute-0 nova_compute[185389]: 2026-01-26 17:36:38.040 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f9b0315f-2a3c-471e-b629-b19d90a40a97/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:36:38 compute-0 nova_compute[185389]: 2026-01-26 17:36:38.079 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:36:38 compute-0 nova_compute[185389]: 2026-01-26 17:36:38.106 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f9b0315f-2a3c-471e-b629-b19d90a40a97/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:36:38 compute-0 nova_compute[185389]: 2026-01-26 17:36:38.107 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f9b0315f-2a3c-471e-b629-b19d90a40a97/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:36:38 compute-0 nova_compute[185389]: 2026-01-26 17:36:38.193 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f9b0315f-2a3c-471e-b629-b19d90a40a97/disk --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:36:38 compute-0 nova_compute[185389]: 2026-01-26 17:36:38.610 185393 WARNING nova.virt.libvirt.driver [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:36:38 compute-0 nova_compute[185389]: 2026-01-26 17:36:38.612 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4869MB free_disk=72.28514862060547GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 17:36:38 compute-0 nova_compute[185389]: 2026-01-26 17:36:38.613 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:36:38 compute-0 nova_compute[185389]: 2026-01-26 17:36:38.613 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:36:38 compute-0 nova_compute[185389]: 2026-01-26 17:36:38.730 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance f9b0315f-2a3c-471e-b629-b19d90a40a97 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 17:36:38 compute-0 nova_compute[185389]: 2026-01-26 17:36:38.730 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance e833646f-b29a-4fe4-b786-4ee23c6f8a82 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 17:36:38 compute-0 nova_compute[185389]: 2026-01-26 17:36:38.731 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 17:36:38 compute-0 nova_compute[185389]: 2026-01-26 17:36:38.731 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 17:36:38 compute-0 nova_compute[185389]: 2026-01-26 17:36:38.823 185393 DEBUG nova.compute.provider_tree [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:36:38 compute-0 nova_compute[185389]: 2026-01-26 17:36:38.847 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:36:38 compute-0 nova_compute[185389]: 2026-01-26 17:36:38.848 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 17:36:38 compute-0 nova_compute[185389]: 2026-01-26 17:36:38.849 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.235s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:36:40 compute-0 nova_compute[185389]: 2026-01-26 17:36:40.212 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:36:41 compute-0 podman[263179]: 2026-01-26 17:36:41.242839317 +0000 UTC m=+0.104989972 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, architecture=x86_64, build-date=2024-09-18T21:23:30, release-0.7.12=, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, config_id=kepler, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., managed_by=edpm_ansible, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Jan 26 17:36:41 compute-0 podman[263178]: 2026-01-26 17:36:41.255021931 +0000 UTC m=+0.118757569 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ceilometer_agent_ipmi, tcib_managed=true, container_name=ceilometer_agent_ipmi)
Jan 26 17:36:41 compute-0 podman[263177]: 2026-01-26 17:36:41.285293042 +0000 UTC m=+0.156274490 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 17:36:43 compute-0 nova_compute[185389]: 2026-01-26 17:36:43.080 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:36:45 compute-0 nova_compute[185389]: 2026-01-26 17:36:45.215 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:36:46 compute-0 nova_compute[185389]: 2026-01-26 17:36:46.844 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:36:46 compute-0 nova_compute[185389]: 2026-01-26 17:36:46.845 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:36:47 compute-0 nova_compute[185389]: 2026-01-26 17:36:47.714 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:36:48 compute-0 nova_compute[185389]: 2026-01-26 17:36:48.083 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:36:50 compute-0 nova_compute[185389]: 2026-01-26 17:36:50.219 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:36:53 compute-0 nova_compute[185389]: 2026-01-26 17:36:53.086 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:36:55 compute-0 nova_compute[185389]: 2026-01-26 17:36:55.224 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:36:58 compute-0 nova_compute[185389]: 2026-01-26 17:36:58.088 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:36:58 compute-0 sshd-session[263238]: Invalid user admin from 176.120.22.13 port 41444
Jan 26 17:36:58 compute-0 sshd-session[263238]: Connection reset by invalid user admin 176.120.22.13 port 41444 [preauth]
Jan 26 17:36:59 compute-0 podman[201244]: time="2026-01-26T17:36:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 17:36:59 compute-0 podman[201244]: @ - - [26/Jan/2026:17:36:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 17:36:59 compute-0 podman[201244]: @ - - [26/Jan/2026:17:36:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4390 "" "Go-http-client/1.1"
Jan 26 17:36:59 compute-0 sshd-session[263240]: Invalid user admin from 176.120.22.13 port 41462
Jan 26 17:36:59 compute-0 sshd-session[263240]: Connection reset by invalid user admin 176.120.22.13 port 41462 [preauth]
Jan 26 17:37:00 compute-0 nova_compute[185389]: 2026-01-26 17:37:00.228 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:37:01 compute-0 openstack_network_exporter[204387]: ERROR   17:37:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 17:37:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:37:01 compute-0 openstack_network_exporter[204387]: ERROR   17:37:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 17:37:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:37:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:37:01.801 106955 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:37:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:37:01.802 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:37:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:37:01.803 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:37:02 compute-0 sshd-session[263242]: Connection reset by authenticating user root 176.120.22.13 port 41466 [preauth]
Jan 26 17:37:03 compute-0 nova_compute[185389]: 2026-01-26 17:37:03.090 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:37:03 compute-0 podman[263246]: 2026-01-26 17:37:03.186847176 +0000 UTC m=+0.066298691 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 26 17:37:03 compute-0 sshd-session[263244]: Connection reset by authenticating user root 176.120.22.13 port 60236 [preauth]
Jan 26 17:37:05 compute-0 sshd-session[263267]: Connection reset by authenticating user root 176.120.22.13 port 60268 [preauth]
Jan 26 17:37:05 compute-0 nova_compute[185389]: 2026-01-26 17:37:05.232 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:37:06 compute-0 podman[263271]: 2026-01-26 17:37:06.210891455 +0000 UTC m=+0.080703815 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 26 17:37:06 compute-0 podman[263270]: 2026-01-26 17:37:06.23843033 +0000 UTC m=+0.098043051 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20260120, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Jan 26 17:37:06 compute-0 podman[263272]: 2026-01-26 17:37:06.245215347 +0000 UTC m=+0.110586276 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 26 17:37:06 compute-0 podman[263269]: 2026-01-26 17:37:06.261763751 +0000 UTC m=+0.123128880 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, managed_by=edpm_ansible, release=1755695350, version=9.6, config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 26 17:37:08 compute-0 nova_compute[185389]: 2026-01-26 17:37:08.094 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:37:10 compute-0 nova_compute[185389]: 2026-01-26 17:37:10.238 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:37:12 compute-0 podman[263349]: 2026-01-26 17:37:12.228099744 +0000 UTC m=+0.106332350 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 17:37:12 compute-0 podman[263350]: 2026-01-26 17:37:12.253542852 +0000 UTC m=+0.120971161 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, vcs-type=git, io.openshift.expose-services=, version=9.4, distribution-scope=public, managed_by=edpm_ansible, config_id=kepler, name=ubi9, release=1214.1726694543, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., release-0.7.12=, vendor=Red Hat, Inc., architecture=x86_64, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Jan 26 17:37:12 compute-0 podman[263348]: 2026-01-26 17:37:12.264737309 +0000 UTC m=+0.148327531 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 26 17:37:13 compute-0 nova_compute[185389]: 2026-01-26 17:37:13.096 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:37:15 compute-0 nova_compute[185389]: 2026-01-26 17:37:15.243 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:37:18 compute-0 nova_compute[185389]: 2026-01-26 17:37:18.099 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:37:20 compute-0 nova_compute[185389]: 2026-01-26 17:37:20.248 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:37:23 compute-0 nova_compute[185389]: 2026-01-26 17:37:23.103 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:37:24 compute-0 nova_compute[185389]: 2026-01-26 17:37:24.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:37:25 compute-0 nova_compute[185389]: 2026-01-26 17:37:25.254 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:37:26 compute-0 nova_compute[185389]: 2026-01-26 17:37:26.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:37:26 compute-0 nova_compute[185389]: 2026-01-26 17:37:26.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:37:26 compute-0 nova_compute[185389]: 2026-01-26 17:37:26.720 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 17:37:27 compute-0 nova_compute[185389]: 2026-01-26 17:37:27.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:37:27 compute-0 nova_compute[185389]: 2026-01-26 17:37:27.721 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 17:37:28 compute-0 nova_compute[185389]: 2026-01-26 17:37:28.088 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "refresh_cache-e833646f-b29a-4fe4-b786-4ee23c6f8a82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 17:37:28 compute-0 nova_compute[185389]: 2026-01-26 17:37:28.089 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquired lock "refresh_cache-e833646f-b29a-4fe4-b786-4ee23c6f8a82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 17:37:28 compute-0 nova_compute[185389]: 2026-01-26 17:37:28.089 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 26 17:37:28 compute-0 nova_compute[185389]: 2026-01-26 17:37:28.104 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:37:29 compute-0 nova_compute[185389]: 2026-01-26 17:37:29.450 185393 DEBUG nova.network.neutron [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] Updating instance_info_cache with network_info: [{"id": "d4acf2b5-6510-4f4d-b2a0-e986d8b8d2f7", "address": "fa:16:3e:80:a8:b1", "network": {"id": "ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.222", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "237a863555d84bd386855d9cf781beb4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4acf2b5-65", "ovs_interfaceid": "d4acf2b5-6510-4f4d-b2a0-e986d8b8d2f7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 17:37:29 compute-0 nova_compute[185389]: 2026-01-26 17:37:29.479 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Releasing lock "refresh_cache-e833646f-b29a-4fe4-b786-4ee23c6f8a82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 17:37:29 compute-0 nova_compute[185389]: 2026-01-26 17:37:29.480 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 26 17:37:29 compute-0 nova_compute[185389]: 2026-01-26 17:37:29.480 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:37:29 compute-0 podman[201244]: time="2026-01-26T17:37:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 17:37:29 compute-0 podman[201244]: @ - - [26/Jan/2026:17:37:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 26 17:37:29 compute-0 podman[201244]: @ - - [26/Jan/2026:17:37:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4396 "" "Go-http-client/1.1"
Jan 26 17:37:30 compute-0 nova_compute[185389]: 2026-01-26 17:37:30.256 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:37:30 compute-0 nova_compute[185389]: 2026-01-26 17:37:30.721 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:37:31 compute-0 openstack_network_exporter[204387]: ERROR   17:37:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 17:37:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:37:31 compute-0 openstack_network_exporter[204387]: ERROR   17:37:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 17:37:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:37:31 compute-0 nova_compute[185389]: 2026-01-26 17:37:31.706 185393 DEBUG oslo_concurrency.lockutils [None req-a51b929f-e716-415a-8a0d-55075288c370 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Acquiring lock "f9b0315f-2a3c-471e-b629-b19d90a40a97" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:37:31 compute-0 nova_compute[185389]: 2026-01-26 17:37:31.706 185393 DEBUG oslo_concurrency.lockutils [None req-a51b929f-e716-415a-8a0d-55075288c370 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Lock "f9b0315f-2a3c-471e-b629-b19d90a40a97" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:37:31 compute-0 nova_compute[185389]: 2026-01-26 17:37:31.707 185393 DEBUG oslo_concurrency.lockutils [None req-a51b929f-e716-415a-8a0d-55075288c370 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Acquiring lock "f9b0315f-2a3c-471e-b629-b19d90a40a97-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:37:31 compute-0 nova_compute[185389]: 2026-01-26 17:37:31.707 185393 DEBUG oslo_concurrency.lockutils [None req-a51b929f-e716-415a-8a0d-55075288c370 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Lock "f9b0315f-2a3c-471e-b629-b19d90a40a97-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:37:31 compute-0 nova_compute[185389]: 2026-01-26 17:37:31.707 185393 DEBUG oslo_concurrency.lockutils [None req-a51b929f-e716-415a-8a0d-55075288c370 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Lock "f9b0315f-2a3c-471e-b629-b19d90a40a97-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:37:31 compute-0 nova_compute[185389]: 2026-01-26 17:37:31.709 185393 INFO nova.compute.manager [None req-a51b929f-e716-415a-8a0d-55075288c370 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] Terminating instance
Jan 26 17:37:31 compute-0 nova_compute[185389]: 2026-01-26 17:37:31.710 185393 DEBUG nova.compute.manager [None req-a51b929f-e716-415a-8a0d-55075288c370 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 17:37:31 compute-0 kernel: tap4ea974be-d9 (unregistering): left promiscuous mode
Jan 26 17:37:31 compute-0 NetworkManager[56253]: <info>  [1769449051.7552] device (tap4ea974be-d9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 17:37:31 compute-0 ovn_controller[97699]: 2026-01-26T17:37:31Z|00184|binding|INFO|Releasing lport 4ea974be-d995-4c0f-bbcd-7a1410b167d8 from this chassis (sb_readonly=0)
Jan 26 17:37:31 compute-0 nova_compute[185389]: 2026-01-26 17:37:31.770 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:37:31 compute-0 ovn_controller[97699]: 2026-01-26T17:37:31Z|00185|binding|INFO|Setting lport 4ea974be-d995-4c0f-bbcd-7a1410b167d8 down in Southbound
Jan 26 17:37:31 compute-0 ovn_controller[97699]: 2026-01-26T17:37:31Z|00186|binding|INFO|Removing iface tap4ea974be-d9 ovn-installed in OVS
Jan 26 17:37:31 compute-0 nova_compute[185389]: 2026-01-26 17:37:31.775 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:37:31 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:37:31.785 106955 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ea:9e:d9 10.100.3.123'], port_security=['fa:16:3e:ea:9e:d9 10.100.3.123'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.3.123/16', 'neutron:device_id': 'f9b0315f-2a3c-471e-b629-b19d90a40a97', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '237a863555d84bd386855d9cf781beb4', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fc68cb5f-1d27-40d0-8734-5af9ebb54c8e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a60e9a2c-a4db-4b50-8dd7-bdfa9e915edf, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7faee18cfdf0>], logical_port=4ea974be-d995-4c0f-bbcd-7a1410b167d8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7faee18cfdf0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 17:37:31 compute-0 nova_compute[185389]: 2026-01-26 17:37:31.786 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:37:31 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:37:31.786 106955 INFO neutron.agent.ovn.metadata.agent [-] Port 4ea974be-d995-4c0f-bbcd-7a1410b167d8 in datapath ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f unbound from our chassis
Jan 26 17:37:31 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:37:31.788 106955 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f
Jan 26 17:37:31 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Jan 26 17:37:31 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000d.scope: Consumed 7min 1.947s CPU time.
Jan 26 17:37:31 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:37:31.820 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[b8db3ce2-add4-4125-8220-c734d4334374]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:37:31 compute-0 systemd-machined[156679]: Machine qemu-14-instance-0000000d terminated.
Jan 26 17:37:31 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:37:31.858 238787 DEBUG oslo.privsep.daemon [-] privsep: reply[abb82bac-854c-4e71-bf45-c61f89a04bef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:37:31 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:37:31.862 238787 DEBUG oslo.privsep.daemon [-] privsep: reply[24429a2f-0789-471c-8dd5-6e1e5722026c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:37:31 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:37:31.895 238787 DEBUG oslo.privsep.daemon [-] privsep: reply[daf9db31-d707-4592-8002-fa3004a01eff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:37:31 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:37:31.915 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[a9ce4114-fcab-4c49-adf8-b6839af60433]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapad47c1ee-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ba:d4:74'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 40, 'tx_packets': 7, 'rx_bytes': 1960, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 40, 'tx_packets': 7, 'rx_bytes': 1960, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 42], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 691054, 'reachable_time': 20099, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263426, 'error': None, 'target': 'ovnmeta-ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:37:31 compute-0 NetworkManager[56253]: <info>  [1769449051.9386] manager: (tap4ea974be-d9): new Tun device (/org/freedesktop/NetworkManager/Devices/80)
Jan 26 17:37:31 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:37:31.938 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[0f601234-f8b6-4e78-9ac6-2083daffe9e3]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapad47c1ee-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 691072, 'tstamp': 691072}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263428, 'error': None, 'target': 'ovnmeta-ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 16, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.255.255'], ['IFA_LABEL', 'tapad47c1ee-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 691075, 'tstamp': 691075}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263428, 'error': None, 'target': 'ovnmeta-ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:37:31 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:37:31.940 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapad47c1ee-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:37:31 compute-0 nova_compute[185389]: 2026-01-26 17:37:31.938 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:37:31 compute-0 nova_compute[185389]: 2026-01-26 17:37:31.947 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:37:31 compute-0 nova_compute[185389]: 2026-01-26 17:37:31.959 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:37:31 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:37:31.960 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapad47c1ee-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:37:31 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:37:31.961 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 17:37:31 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:37:31.961 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapad47c1ee-d0, col_values=(('external_ids', {'iface-id': '072b84ed-db94-41f8-b8ae-79603b591704'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:37:31 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:37:31.962 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 17:37:32 compute-0 nova_compute[185389]: 2026-01-26 17:37:32.004 185393 INFO nova.virt.libvirt.driver [-] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] Instance destroyed successfully.
Jan 26 17:37:32 compute-0 nova_compute[185389]: 2026-01-26 17:37:32.005 185393 DEBUG nova.objects.instance [None req-a51b929f-e716-415a-8a0d-55075288c370 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Lazy-loading 'resources' on Instance uuid f9b0315f-2a3c-471e-b629-b19d90a40a97 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 17:37:32 compute-0 nova_compute[185389]: 2026-01-26 17:37:32.025 185393 DEBUG nova.virt.libvirt.vif [None req-a51b929f-e716-415a-8a0d-55075288c370 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T17:24:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='te-9772802-asg-iiflfa6ewgov-qnk6sbr7rvkm-ziuhg6hort2c',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-9772802-asg-iiflfa6ewgov-qnk6sbr7rvkm-ziuhg6hort2c',id=13,image_ref='a3153c85-d830-4fd6-8cd6-1a69e6723a9e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T17:24:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='21873820-28a9-4731-9256-efbf2eb46b4d'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='237a863555d84bd386855d9cf781beb4',ramdisk_id='',reservation_id='r-o33mgm0h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a3153c85-d830-4fd6-8cd6-1a69e6723a9e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_
disk='1',image_min_ram='0',owner_project_name='tempest-PrometheusGabbiTest-2035201521',owner_user_name='tempest-PrometheusGabbiTest-2035201521-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T17:24:33Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='5ca35c18e54b493f9efdfe2218cce3c7',uuid=f9b0315f-2a3c-471e-b629-b19d90a40a97,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4ea974be-d995-4c0f-bbcd-7a1410b167d8", "address": "fa:16:3e:ea:9e:d9", "network": {"id": "ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.123", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "237a863555d84bd386855d9cf781beb4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ea974be-d9", "ovs_interfaceid": "4ea974be-d995-4c0f-bbcd-7a1410b167d8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 17:37:32 compute-0 nova_compute[185389]: 2026-01-26 17:37:32.026 185393 DEBUG nova.network.os_vif_util [None req-a51b929f-e716-415a-8a0d-55075288c370 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Converting VIF {"id": "4ea974be-d995-4c0f-bbcd-7a1410b167d8", "address": "fa:16:3e:ea:9e:d9", "network": {"id": "ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.123", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "237a863555d84bd386855d9cf781beb4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ea974be-d9", "ovs_interfaceid": "4ea974be-d995-4c0f-bbcd-7a1410b167d8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 17:37:32 compute-0 nova_compute[185389]: 2026-01-26 17:37:32.027 185393 DEBUG nova.network.os_vif_util [None req-a51b929f-e716-415a-8a0d-55075288c370 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ea:9e:d9,bridge_name='br-int',has_traffic_filtering=True,id=4ea974be-d995-4c0f-bbcd-7a1410b167d8,network=Network(ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ea974be-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 17:37:32 compute-0 nova_compute[185389]: 2026-01-26 17:37:32.027 185393 DEBUG os_vif [None req-a51b929f-e716-415a-8a0d-55075288c370 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ea:9e:d9,bridge_name='br-int',has_traffic_filtering=True,id=4ea974be-d995-4c0f-bbcd-7a1410b167d8,network=Network(ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ea974be-d9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 17:37:32 compute-0 nova_compute[185389]: 2026-01-26 17:37:32.029 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:37:32 compute-0 nova_compute[185389]: 2026-01-26 17:37:32.030 185393 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4ea974be-d9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:37:32 compute-0 nova_compute[185389]: 2026-01-26 17:37:32.033 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:37:32 compute-0 nova_compute[185389]: 2026-01-26 17:37:32.035 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:37:32 compute-0 nova_compute[185389]: 2026-01-26 17:37:32.040 185393 INFO os_vif [None req-a51b929f-e716-415a-8a0d-55075288c370 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ea:9e:d9,bridge_name='br-int',has_traffic_filtering=True,id=4ea974be-d995-4c0f-bbcd-7a1410b167d8,network=Network(ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ea974be-d9')
Jan 26 17:37:32 compute-0 nova_compute[185389]: 2026-01-26 17:37:32.041 185393 INFO nova.virt.libvirt.driver [None req-a51b929f-e716-415a-8a0d-55075288c370 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] Deleting instance files /var/lib/nova/instances/f9b0315f-2a3c-471e-b629-b19d90a40a97_del
Jan 26 17:37:32 compute-0 nova_compute[185389]: 2026-01-26 17:37:32.042 185393 INFO nova.virt.libvirt.driver [None req-a51b929f-e716-415a-8a0d-55075288c370 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] Deletion of /var/lib/nova/instances/f9b0315f-2a3c-471e-b629-b19d90a40a97_del complete
Jan 26 17:37:32 compute-0 nova_compute[185389]: 2026-01-26 17:37:32.099 185393 INFO nova.compute.manager [None req-a51b929f-e716-415a-8a0d-55075288c370 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] Took 0.39 seconds to destroy the instance on the hypervisor.
Jan 26 17:37:32 compute-0 nova_compute[185389]: 2026-01-26 17:37:32.099 185393 DEBUG oslo.service.loopingcall [None req-a51b929f-e716-415a-8a0d-55075288c370 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 17:37:32 compute-0 nova_compute[185389]: 2026-01-26 17:37:32.100 185393 DEBUG nova.compute.manager [-] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 17:37:32 compute-0 nova_compute[185389]: 2026-01-26 17:37:32.101 185393 DEBUG nova.network.neutron [-] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 17:37:33 compute-0 nova_compute[185389]: 2026-01-26 17:37:33.107 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:37:34 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:37:34.128 106955 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '0e:35:12', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'de:09:3c:73:7d:ab'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 17:37:34 compute-0 nova_compute[185389]: 2026-01-26 17:37:34.129 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:37:34 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:37:34.130 106955 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 26 17:37:34 compute-0 nova_compute[185389]: 2026-01-26 17:37:34.182 185393 DEBUG nova.compute.manager [req-6c38fc1a-511f-4f3e-b85c-191dfb8f20f3 req-49dbe98d-153c-4ab4-864f-3aa3fee38ab2 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] Received event network-vif-unplugged-4ea974be-d995-4c0f-bbcd-7a1410b167d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 17:37:34 compute-0 nova_compute[185389]: 2026-01-26 17:37:34.183 185393 DEBUG oslo_concurrency.lockutils [req-6c38fc1a-511f-4f3e-b85c-191dfb8f20f3 req-49dbe98d-153c-4ab4-864f-3aa3fee38ab2 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquiring lock "f9b0315f-2a3c-471e-b629-b19d90a40a97-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:37:34 compute-0 nova_compute[185389]: 2026-01-26 17:37:34.185 185393 DEBUG oslo_concurrency.lockutils [req-6c38fc1a-511f-4f3e-b85c-191dfb8f20f3 req-49dbe98d-153c-4ab4-864f-3aa3fee38ab2 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "f9b0315f-2a3c-471e-b629-b19d90a40a97-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:37:34 compute-0 nova_compute[185389]: 2026-01-26 17:37:34.186 185393 DEBUG oslo_concurrency.lockutils [req-6c38fc1a-511f-4f3e-b85c-191dfb8f20f3 req-49dbe98d-153c-4ab4-864f-3aa3fee38ab2 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "f9b0315f-2a3c-471e-b629-b19d90a40a97-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:37:34 compute-0 nova_compute[185389]: 2026-01-26 17:37:34.187 185393 DEBUG nova.compute.manager [req-6c38fc1a-511f-4f3e-b85c-191dfb8f20f3 req-49dbe98d-153c-4ab4-864f-3aa3fee38ab2 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] No waiting events found dispatching network-vif-unplugged-4ea974be-d995-4c0f-bbcd-7a1410b167d8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 17:37:34 compute-0 nova_compute[185389]: 2026-01-26 17:37:34.188 185393 DEBUG nova.compute.manager [req-6c38fc1a-511f-4f3e-b85c-191dfb8f20f3 req-49dbe98d-153c-4ab4-864f-3aa3fee38ab2 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] Received event network-vif-unplugged-4ea974be-d995-4c0f-bbcd-7a1410b167d8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 17:37:34 compute-0 podman[263443]: 2026-01-26 17:37:34.241653536 +0000 UTC m=+0.108231291 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 26 17:37:34 compute-0 nova_compute[185389]: 2026-01-26 17:37:34.444 185393 DEBUG nova.network.neutron [-] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 17:37:34 compute-0 nova_compute[185389]: 2026-01-26 17:37:34.466 185393 INFO nova.compute.manager [-] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] Took 2.37 seconds to deallocate network for instance.
Jan 26 17:37:34 compute-0 nova_compute[185389]: 2026-01-26 17:37:34.523 185393 DEBUG oslo_concurrency.lockutils [None req-a51b929f-e716-415a-8a0d-55075288c370 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:37:34 compute-0 nova_compute[185389]: 2026-01-26 17:37:34.523 185393 DEBUG oslo_concurrency.lockutils [None req-a51b929f-e716-415a-8a0d-55075288c370 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:37:34 compute-0 nova_compute[185389]: 2026-01-26 17:37:34.637 185393 DEBUG nova.compute.provider_tree [None req-a51b929f-e716-415a-8a0d-55075288c370 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:37:34 compute-0 nova_compute[185389]: 2026-01-26 17:37:34.653 185393 DEBUG nova.scheduler.client.report [None req-a51b929f-e716-415a-8a0d-55075288c370 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:37:34 compute-0 nova_compute[185389]: 2026-01-26 17:37:34.675 185393 DEBUG oslo_concurrency.lockutils [None req-a51b929f-e716-415a-8a0d-55075288c370 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.151s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:37:34 compute-0 nova_compute[185389]: 2026-01-26 17:37:34.709 185393 INFO nova.scheduler.client.report [None req-a51b929f-e716-415a-8a0d-55075288c370 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Deleted allocations for instance f9b0315f-2a3c-471e-b629-b19d90a40a97
Jan 26 17:37:34 compute-0 nova_compute[185389]: 2026-01-26 17:37:34.825 185393 DEBUG oslo_concurrency.lockutils [None req-a51b929f-e716-415a-8a0d-55075288c370 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Lock "f9b0315f-2a3c-471e-b629-b19d90a40a97" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.119s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:37:34 compute-0 nova_compute[185389]: 2026-01-26 17:37:34.951 185393 DEBUG nova.compute.manager [req-d724b59c-f137-4b34-851f-33d15476d31d req-9b993575-8ff5-4976-b069-e8cd845df0c6 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] Received event network-vif-deleted-4ea974be-d995-4c0f-bbcd-7a1410b167d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 17:37:36 compute-0 nova_compute[185389]: 2026-01-26 17:37:36.287 185393 DEBUG nova.compute.manager [req-efc57437-29cc-468c-bb32-b5ce9b223e2d req-027857c3-a2f8-4814-bfa8-a332ef4b1319 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] Received event network-vif-plugged-4ea974be-d995-4c0f-bbcd-7a1410b167d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 17:37:36 compute-0 nova_compute[185389]: 2026-01-26 17:37:36.287 185393 DEBUG oslo_concurrency.lockutils [req-efc57437-29cc-468c-bb32-b5ce9b223e2d req-027857c3-a2f8-4814-bfa8-a332ef4b1319 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquiring lock "f9b0315f-2a3c-471e-b629-b19d90a40a97-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:37:36 compute-0 nova_compute[185389]: 2026-01-26 17:37:36.287 185393 DEBUG oslo_concurrency.lockutils [req-efc57437-29cc-468c-bb32-b5ce9b223e2d req-027857c3-a2f8-4814-bfa8-a332ef4b1319 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "f9b0315f-2a3c-471e-b629-b19d90a40a97-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:37:36 compute-0 nova_compute[185389]: 2026-01-26 17:37:36.287 185393 DEBUG oslo_concurrency.lockutils [req-efc57437-29cc-468c-bb32-b5ce9b223e2d req-027857c3-a2f8-4814-bfa8-a332ef4b1319 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "f9b0315f-2a3c-471e-b629-b19d90a40a97-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:37:36 compute-0 nova_compute[185389]: 2026-01-26 17:37:36.288 185393 DEBUG nova.compute.manager [req-efc57437-29cc-468c-bb32-b5ce9b223e2d req-027857c3-a2f8-4814-bfa8-a332ef4b1319 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] No waiting events found dispatching network-vif-plugged-4ea974be-d995-4c0f-bbcd-7a1410b167d8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 17:37:36 compute-0 nova_compute[185389]: 2026-01-26 17:37:36.288 185393 WARNING nova.compute.manager [req-efc57437-29cc-468c-bb32-b5ce9b223e2d req-027857c3-a2f8-4814-bfa8-a332ef4b1319 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] Received unexpected event network-vif-plugged-4ea974be-d995-4c0f-bbcd-7a1410b167d8 for instance with vm_state deleted and task_state None.
Jan 26 17:37:37 compute-0 nova_compute[185389]: 2026-01-26 17:37:37.034 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:37:37 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:37:37.132 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1c72c11d-5050-47c3-89e8-912766588fb3, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:37:37 compute-0 podman[263466]: 2026-01-26 17:37:37.206801139 +0000 UTC m=+0.088725236 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, 
managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260120)
Jan 26 17:37:37 compute-0 podman[263468]: 2026-01-26 17:37:37.209664398 +0000 UTC m=+0.084038158 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 17:37:37 compute-0 podman[263465]: 2026-01-26 17:37:37.209922095 +0000 UTC m=+0.096695915 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, release=1755695350, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, architecture=x86_64, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter)
Jan 26 17:37:37 compute-0 podman[263467]: 2026-01-26 17:37:37.23819323 +0000 UTC m=+0.111247484 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 26 17:37:38 compute-0 nova_compute[185389]: 2026-01-26 17:37:38.110 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:37:38 compute-0 nova_compute[185389]: 2026-01-26 17:37:38.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:37:38 compute-0 nova_compute[185389]: 2026-01-26 17:37:38.784 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:37:38 compute-0 nova_compute[185389]: 2026-01-26 17:37:38.785 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:37:38 compute-0 nova_compute[185389]: 2026-01-26 17:37:38.785 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:37:38 compute-0 nova_compute[185389]: 2026-01-26 17:37:38.785 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 17:37:38 compute-0 nova_compute[185389]: 2026-01-26 17:37:38.871 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:37:38 compute-0 nova_compute[185389]: 2026-01-26 17:37:38.951 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:37:38 compute-0 nova_compute[185389]: 2026-01-26 17:37:38.953 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:37:39 compute-0 nova_compute[185389]: 2026-01-26 17:37:39.022 185393 DEBUG oslo_concurrency.processutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e833646f-b29a-4fe4-b786-4ee23c6f8a82/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:37:39 compute-0 nova_compute[185389]: 2026-01-26 17:37:39.420 185393 WARNING nova.virt.libvirt.driver [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:37:39 compute-0 nova_compute[185389]: 2026-01-26 17:37:39.422 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5074MB free_disk=72.31377410888672GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 17:37:39 compute-0 nova_compute[185389]: 2026-01-26 17:37:39.423 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:37:39 compute-0 nova_compute[185389]: 2026-01-26 17:37:39.423 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:37:39 compute-0 nova_compute[185389]: 2026-01-26 17:37:39.505 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Instance e833646f-b29a-4fe4-b786-4ee23c6f8a82 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 17:37:39 compute-0 nova_compute[185389]: 2026-01-26 17:37:39.506 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 17:37:39 compute-0 nova_compute[185389]: 2026-01-26 17:37:39.506 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 17:37:39 compute-0 nova_compute[185389]: 2026-01-26 17:37:39.559 185393 DEBUG nova.compute.provider_tree [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:37:39 compute-0 nova_compute[185389]: 2026-01-26 17:37:39.579 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:37:39 compute-0 nova_compute[185389]: 2026-01-26 17:37:39.601 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 17:37:39 compute-0 nova_compute[185389]: 2026-01-26 17:37:39.602 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.178s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:37:42 compute-0 nova_compute[185389]: 2026-01-26 17:37:42.037 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:37:43 compute-0 nova_compute[185389]: 2026-01-26 17:37:43.111 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:37:43 compute-0 podman[263552]: 2026-01-26 17:37:43.213352263 +0000 UTC m=+0.093340282 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 26 17:37:43 compute-0 podman[263553]: 2026-01-26 17:37:43.215253316 +0000 UTC m=+0.086480045 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, name=ubi9, architecture=x86_64, build-date=2024-09-18T21:23:30, release=1214.1726694543, version=9.4, io.buildah.version=1.29.0, distribution-scope=public, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, release-0.7.12=, vendor=Red Hat, Inc., 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container)
Jan 26 17:37:43 compute-0 podman[263551]: 2026-01-26 17:37:43.254196605 +0000 UTC m=+0.144029814 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, 
config_id=ovn_controller, managed_by=edpm_ansible)
Jan 26 17:37:44 compute-0 nova_compute[185389]: 2026-01-26 17:37:44.277 185393 DEBUG oslo_concurrency.lockutils [None req-fb8be905-3e9c-4cf9-b63f-11aba8b07e25 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Acquiring lock "e833646f-b29a-4fe4-b786-4ee23c6f8a82" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:37:44 compute-0 nova_compute[185389]: 2026-01-26 17:37:44.278 185393 DEBUG oslo_concurrency.lockutils [None req-fb8be905-3e9c-4cf9-b63f-11aba8b07e25 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Lock "e833646f-b29a-4fe4-b786-4ee23c6f8a82" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:37:44 compute-0 nova_compute[185389]: 2026-01-26 17:37:44.278 185393 DEBUG oslo_concurrency.lockutils [None req-fb8be905-3e9c-4cf9-b63f-11aba8b07e25 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Acquiring lock "e833646f-b29a-4fe4-b786-4ee23c6f8a82-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:37:44 compute-0 nova_compute[185389]: 2026-01-26 17:37:44.279 185393 DEBUG oslo_concurrency.lockutils [None req-fb8be905-3e9c-4cf9-b63f-11aba8b07e25 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Lock "e833646f-b29a-4fe4-b786-4ee23c6f8a82-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:37:44 compute-0 nova_compute[185389]: 2026-01-26 17:37:44.280 185393 DEBUG oslo_concurrency.lockutils [None req-fb8be905-3e9c-4cf9-b63f-11aba8b07e25 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Lock "e833646f-b29a-4fe4-b786-4ee23c6f8a82-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:37:44 compute-0 nova_compute[185389]: 2026-01-26 17:37:44.281 185393 INFO nova.compute.manager [None req-fb8be905-3e9c-4cf9-b63f-11aba8b07e25 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] Terminating instance
Jan 26 17:37:44 compute-0 nova_compute[185389]: 2026-01-26 17:37:44.282 185393 DEBUG nova.compute.manager [None req-fb8be905-3e9c-4cf9-b63f-11aba8b07e25 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 17:37:44 compute-0 kernel: tapd4acf2b5-65 (unregistering): left promiscuous mode
Jan 26 17:37:44 compute-0 NetworkManager[56253]: <info>  [1769449064.3171] device (tapd4acf2b5-65): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 17:37:44 compute-0 ovn_controller[97699]: 2026-01-26T17:37:44Z|00187|binding|INFO|Releasing lport d4acf2b5-6510-4f4d-b2a0-e986d8b8d2f7 from this chassis (sb_readonly=0)
Jan 26 17:37:44 compute-0 ovn_controller[97699]: 2026-01-26T17:37:44Z|00188|binding|INFO|Setting lport d4acf2b5-6510-4f4d-b2a0-e986d8b8d2f7 down in Southbound
Jan 26 17:37:44 compute-0 ovn_controller[97699]: 2026-01-26T17:37:44Z|00189|binding|INFO|Removing iface tapd4acf2b5-65 ovn-installed in OVS
Jan 26 17:37:44 compute-0 nova_compute[185389]: 2026-01-26 17:37:44.327 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:37:44 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:37:44.334 106955 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:80:a8:b1 10.100.0.222'], port_security=['fa:16:3e:80:a8:b1 10.100.0.222'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.222/16', 'neutron:device_id': 'e833646f-b29a-4fe4-b786-4ee23c6f8a82', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '237a863555d84bd386855d9cf781beb4', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fc68cb5f-1d27-40d0-8734-5af9ebb54c8e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a60e9a2c-a4db-4b50-8dd7-bdfa9e915edf, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7faee18cfdf0>], logical_port=d4acf2b5-6510-4f4d-b2a0-e986d8b8d2f7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7faee18cfdf0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 17:37:44 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:37:44.335 106955 INFO neutron.agent.ovn.metadata.agent [-] Port d4acf2b5-6510-4f4d-b2a0-e986d8b8d2f7 in datapath ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f unbound from our chassis
Jan 26 17:37:44 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:37:44.337 106955 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 17:37:44 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:37:44.338 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[1799d4cd-e5c6-4652-a376-fe37995e3cc9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:37:44 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:37:44.339 106955 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f namespace which is not needed anymore
Jan 26 17:37:44 compute-0 nova_compute[185389]: 2026-01-26 17:37:44.347 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:37:44 compute-0 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000010.scope: Deactivated successfully.
Jan 26 17:37:44 compute-0 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000010.scope: Consumed 6min 37.153s CPU time.
Jan 26 17:37:44 compute-0 systemd-machined[156679]: Machine qemu-17-instance-00000010 terminated.
Jan 26 17:37:44 compute-0 nova_compute[185389]: 2026-01-26 17:37:44.513 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:37:44 compute-0 neutron-haproxy-ovnmeta-ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f[258522]: [NOTICE]   (258526) : haproxy version is 2.8.14-c23fe91
Jan 26 17:37:44 compute-0 neutron-haproxy-ovnmeta-ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f[258522]: [NOTICE]   (258526) : path to executable is /usr/sbin/haproxy
Jan 26 17:37:44 compute-0 neutron-haproxy-ovnmeta-ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f[258522]: [WARNING]  (258526) : Exiting Master process...
Jan 26 17:37:44 compute-0 nova_compute[185389]: 2026-01-26 17:37:44.519 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:37:44 compute-0 neutron-haproxy-ovnmeta-ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f[258522]: [ALERT]    (258526) : Current worker (258528) exited with code 143 (Terminated)
Jan 26 17:37:44 compute-0 neutron-haproxy-ovnmeta-ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f[258522]: [WARNING]  (258526) : All workers exited. Exiting... (0)
Jan 26 17:37:44 compute-0 systemd[1]: libpod-f11e9df156ecc511226170db45ec5176ba38d57a473fd016daa9bb147140b5b8.scope: Deactivated successfully.
Jan 26 17:37:44 compute-0 podman[263641]: 2026-01-26 17:37:44.532297828 +0000 UTC m=+0.066823685 container died f11e9df156ecc511226170db45ec5176ba38d57a473fd016daa9bb147140b5b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 17:37:44 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f11e9df156ecc511226170db45ec5176ba38d57a473fd016daa9bb147140b5b8-userdata-shm.mount: Deactivated successfully.
Jan 26 17:37:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-004a1f2e6a5973c77922b17c81b2560a93c0489232455d8429f76fca2518fa37-merged.mount: Deactivated successfully.
Jan 26 17:37:44 compute-0 nova_compute[185389]: 2026-01-26 17:37:44.580 185393 INFO nova.virt.libvirt.driver [-] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] Instance destroyed successfully.
Jan 26 17:37:44 compute-0 nova_compute[185389]: 2026-01-26 17:37:44.581 185393 DEBUG nova.objects.instance [None req-fb8be905-3e9c-4cf9-b63f-11aba8b07e25 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Lazy-loading 'resources' on Instance uuid e833646f-b29a-4fe4-b786-4ee23c6f8a82 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 17:37:44 compute-0 podman[263641]: 2026-01-26 17:37:44.588154871 +0000 UTC m=+0.122680738 container cleanup f11e9df156ecc511226170db45ec5176ba38d57a473fd016daa9bb147140b5b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 26 17:37:44 compute-0 systemd[1]: libpod-conmon-f11e9df156ecc511226170db45ec5176ba38d57a473fd016daa9bb147140b5b8.scope: Deactivated successfully.
Jan 26 17:37:44 compute-0 nova_compute[185389]: 2026-01-26 17:37:44.601 185393 DEBUG nova.virt.libvirt.vif [None req-fb8be905-3e9c-4cf9-b63f-11aba8b07e25 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T17:27:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='te-9772802-asg-iiflfa6ewgov-5e5s3ztwtke7-uezresy5bjmm',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-9772802-asg-iiflfa6ewgov-5e5s3ztwtke7-uezresy5bjmm',id=16,image_ref='a3153c85-d830-4fd6-8cd6-1a69e6723a9e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T17:27:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='21873820-28a9-4731-9256-efbf2eb46b4d'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='237a863555d84bd386855d9cf781beb4',ramdisk_id='',reservation_id='r-mdspyvmo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a3153c85-d830-4fd6-8cd6-1a69e6723a9e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_
disk='1',image_min_ram='0',owner_project_name='tempest-PrometheusGabbiTest-2035201521',owner_user_name='tempest-PrometheusGabbiTest-2035201521-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T17:27:46Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='5ca35c18e54b493f9efdfe2218cce3c7',uuid=e833646f-b29a-4fe4-b786-4ee23c6f8a82,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d4acf2b5-6510-4f4d-b2a0-e986d8b8d2f7", "address": "fa:16:3e:80:a8:b1", "network": {"id": "ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.222", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "237a863555d84bd386855d9cf781beb4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4acf2b5-65", "ovs_interfaceid": "d4acf2b5-6510-4f4d-b2a0-e986d8b8d2f7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 17:37:44 compute-0 nova_compute[185389]: 2026-01-26 17:37:44.601 185393 DEBUG nova.network.os_vif_util [None req-fb8be905-3e9c-4cf9-b63f-11aba8b07e25 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Converting VIF {"id": "d4acf2b5-6510-4f4d-b2a0-e986d8b8d2f7", "address": "fa:16:3e:80:a8:b1", "network": {"id": "ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.222", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "237a863555d84bd386855d9cf781beb4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4acf2b5-65", "ovs_interfaceid": "d4acf2b5-6510-4f4d-b2a0-e986d8b8d2f7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 17:37:44 compute-0 nova_compute[185389]: 2026-01-26 17:37:44.602 185393 DEBUG nova.network.os_vif_util [None req-fb8be905-3e9c-4cf9-b63f-11aba8b07e25 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:80:a8:b1,bridge_name='br-int',has_traffic_filtering=True,id=d4acf2b5-6510-4f4d-b2a0-e986d8b8d2f7,network=Network(ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4acf2b5-65') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 17:37:44 compute-0 nova_compute[185389]: 2026-01-26 17:37:44.603 185393 DEBUG os_vif [None req-fb8be905-3e9c-4cf9-b63f-11aba8b07e25 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:80:a8:b1,bridge_name='br-int',has_traffic_filtering=True,id=d4acf2b5-6510-4f4d-b2a0-e986d8b8d2f7,network=Network(ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4acf2b5-65') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 17:37:44 compute-0 nova_compute[185389]: 2026-01-26 17:37:44.604 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:37:44 compute-0 nova_compute[185389]: 2026-01-26 17:37:44.605 185393 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd4acf2b5-65, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:37:44 compute-0 nova_compute[185389]: 2026-01-26 17:37:44.609 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 17:37:44 compute-0 nova_compute[185389]: 2026-01-26 17:37:44.611 185393 INFO os_vif [None req-fb8be905-3e9c-4cf9-b63f-11aba8b07e25 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:80:a8:b1,bridge_name='br-int',has_traffic_filtering=True,id=d4acf2b5-6510-4f4d-b2a0-e986d8b8d2f7,network=Network(ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4acf2b5-65')
Jan 26 17:37:44 compute-0 nova_compute[185389]: 2026-01-26 17:37:44.612 185393 INFO nova.virt.libvirt.driver [None req-fb8be905-3e9c-4cf9-b63f-11aba8b07e25 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] Deleting instance files /var/lib/nova/instances/e833646f-b29a-4fe4-b786-4ee23c6f8a82_del
Jan 26 17:37:44 compute-0 nova_compute[185389]: 2026-01-26 17:37:44.613 185393 INFO nova.virt.libvirt.driver [None req-fb8be905-3e9c-4cf9-b63f-11aba8b07e25 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] Deletion of /var/lib/nova/instances/e833646f-b29a-4fe4-b786-4ee23c6f8a82_del complete
Jan 26 17:37:44 compute-0 nova_compute[185389]: 2026-01-26 17:37:44.652 185393 DEBUG nova.compute.manager [req-aff62178-8b93-4cc4-b61f-68f139af5fa0 req-e415d2f7-c9bd-4ed0-bcbf-906335e6bd2e 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] Received event network-vif-unplugged-d4acf2b5-6510-4f4d-b2a0-e986d8b8d2f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 17:37:44 compute-0 nova_compute[185389]: 2026-01-26 17:37:44.653 185393 DEBUG oslo_concurrency.lockutils [req-aff62178-8b93-4cc4-b61f-68f139af5fa0 req-e415d2f7-c9bd-4ed0-bcbf-906335e6bd2e 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquiring lock "e833646f-b29a-4fe4-b786-4ee23c6f8a82-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:37:44 compute-0 nova_compute[185389]: 2026-01-26 17:37:44.654 185393 DEBUG oslo_concurrency.lockutils [req-aff62178-8b93-4cc4-b61f-68f139af5fa0 req-e415d2f7-c9bd-4ed0-bcbf-906335e6bd2e 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "e833646f-b29a-4fe4-b786-4ee23c6f8a82-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:37:44 compute-0 nova_compute[185389]: 2026-01-26 17:37:44.654 185393 DEBUG oslo_concurrency.lockutils [req-aff62178-8b93-4cc4-b61f-68f139af5fa0 req-e415d2f7-c9bd-4ed0-bcbf-906335e6bd2e 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "e833646f-b29a-4fe4-b786-4ee23c6f8a82-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:37:44 compute-0 nova_compute[185389]: 2026-01-26 17:37:44.655 185393 DEBUG nova.compute.manager [req-aff62178-8b93-4cc4-b61f-68f139af5fa0 req-e415d2f7-c9bd-4ed0-bcbf-906335e6bd2e 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] No waiting events found dispatching network-vif-unplugged-d4acf2b5-6510-4f4d-b2a0-e986d8b8d2f7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 17:37:44 compute-0 nova_compute[185389]: 2026-01-26 17:37:44.655 185393 DEBUG nova.compute.manager [req-aff62178-8b93-4cc4-b61f-68f139af5fa0 req-e415d2f7-c9bd-4ed0-bcbf-906335e6bd2e 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] Received event network-vif-unplugged-d4acf2b5-6510-4f4d-b2a0-e986d8b8d2f7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 17:37:44 compute-0 nova_compute[185389]: 2026-01-26 17:37:44.677 185393 INFO nova.compute.manager [None req-fb8be905-3e9c-4cf9-b63f-11aba8b07e25 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] Took 0.39 seconds to destroy the instance on the hypervisor.
Jan 26 17:37:44 compute-0 nova_compute[185389]: 2026-01-26 17:37:44.678 185393 DEBUG oslo.service.loopingcall [None req-fb8be905-3e9c-4cf9-b63f-11aba8b07e25 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 17:37:44 compute-0 nova_compute[185389]: 2026-01-26 17:37:44.679 185393 DEBUG nova.compute.manager [-] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 17:37:44 compute-0 nova_compute[185389]: 2026-01-26 17:37:44.679 185393 DEBUG nova.network.neutron [-] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 17:37:44 compute-0 podman[263688]: 2026-01-26 17:37:44.686633394 +0000 UTC m=+0.062347942 container remove f11e9df156ecc511226170db45ec5176ba38d57a473fd016daa9bb147140b5b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 26 17:37:44 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:37:44.704 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[57f45a94-4df1-4112-95f0-82493ddc79c9]: (4, ('Mon Jan 26 05:37:44 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f (f11e9df156ecc511226170db45ec5176ba38d57a473fd016daa9bb147140b5b8)\nf11e9df156ecc511226170db45ec5176ba38d57a473fd016daa9bb147140b5b8\nMon Jan 26 05:37:44 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f (f11e9df156ecc511226170db45ec5176ba38d57a473fd016daa9bb147140b5b8)\nf11e9df156ecc511226170db45ec5176ba38d57a473fd016daa9bb147140b5b8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:37:44 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:37:44.706 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[002e9ea8-1814-4473-a720-9ff1e7491794]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:37:44 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:37:44.707 106955 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapad47c1ee-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 17:37:44 compute-0 kernel: tapad47c1ee-d0: left promiscuous mode
Jan 26 17:37:44 compute-0 nova_compute[185389]: 2026-01-26 17:37:44.709 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:37:44 compute-0 nova_compute[185389]: 2026-01-26 17:37:44.722 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:37:44 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:37:44.726 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[58461a97-0757-4001-96a2-19a84ef442b7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:37:44 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:37:44.745 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[2af1e41b-ff23-4a7d-be99-01be8216dbda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:37:44 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:37:44.747 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[8a3bf1ef-6792-4b78-830b-955963b8ba72]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:37:44 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:37:44.771 238734 DEBUG oslo.privsep.daemon [-] privsep: reply[8dd0174d-68e0-4ff4-a8bb-36613d8399e8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 691044, 'reachable_time': 29815, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263702, 'error': None, 'target': 'ovnmeta-ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:37:44 compute-0 systemd[1]: run-netns-ovnmeta\x2dad47c1ee\x2dd81b\x2d4f9f\x2d9b3c\x2ddb0c3229e17f.mount: Deactivated successfully.
Jan 26 17:37:44 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:37:44.777 107449 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ad47c1ee-d81b-4f9f-9b3c-db0c3229e17f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 17:37:44 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:37:44.777 107449 DEBUG oslo.privsep.daemon [-] privsep: reply[f5d17551-d573-460e-97d5-7cd9c381985b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 17:37:46 compute-0 nova_compute[185389]: 2026-01-26 17:37:46.173 185393 DEBUG nova.network.neutron [-] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 17:37:46 compute-0 nova_compute[185389]: 2026-01-26 17:37:46.199 185393 INFO nova.compute.manager [-] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] Took 1.52 seconds to deallocate network for instance.
Jan 26 17:37:46 compute-0 nova_compute[185389]: 2026-01-26 17:37:46.267 185393 DEBUG oslo_concurrency.lockutils [None req-fb8be905-3e9c-4cf9-b63f-11aba8b07e25 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:37:46 compute-0 nova_compute[185389]: 2026-01-26 17:37:46.268 185393 DEBUG oslo_concurrency.lockutils [None req-fb8be905-3e9c-4cf9-b63f-11aba8b07e25 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:37:46 compute-0 nova_compute[185389]: 2026-01-26 17:37:46.334 185393 DEBUG nova.compute.provider_tree [None req-fb8be905-3e9c-4cf9-b63f-11aba8b07e25 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:37:46 compute-0 nova_compute[185389]: 2026-01-26 17:37:46.357 185393 DEBUG nova.scheduler.client.report [None req-fb8be905-3e9c-4cf9-b63f-11aba8b07e25 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:37:46 compute-0 nova_compute[185389]: 2026-01-26 17:37:46.398 185393 DEBUG oslo_concurrency.lockutils [None req-fb8be905-3e9c-4cf9-b63f-11aba8b07e25 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.130s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:37:46 compute-0 nova_compute[185389]: 2026-01-26 17:37:46.435 185393 INFO nova.scheduler.client.report [None req-fb8be905-3e9c-4cf9-b63f-11aba8b07e25 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Deleted allocations for instance e833646f-b29a-4fe4-b786-4ee23c6f8a82
Jan 26 17:37:46 compute-0 nova_compute[185389]: 2026-01-26 17:37:46.519 185393 DEBUG oslo_concurrency.lockutils [None req-fb8be905-3e9c-4cf9-b63f-11aba8b07e25 5ca35c18e54b493f9efdfe2218cce3c7 237a863555d84bd386855d9cf781beb4 - - default default] Lock "e833646f-b29a-4fe4-b786-4ee23c6f8a82" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.241s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:37:46 compute-0 nova_compute[185389]: 2026-01-26 17:37:46.596 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:37:46 compute-0 nova_compute[185389]: 2026-01-26 17:37:46.772 185393 DEBUG nova.compute.manager [req-07e1d773-2b5a-49da-b51d-42760833c95d req-c62a9d77-25d1-4da7-ac7a-7ccdec8801a7 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] Received event network-vif-plugged-d4acf2b5-6510-4f4d-b2a0-e986d8b8d2f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 17:37:46 compute-0 nova_compute[185389]: 2026-01-26 17:37:46.772 185393 DEBUG oslo_concurrency.lockutils [req-07e1d773-2b5a-49da-b51d-42760833c95d req-c62a9d77-25d1-4da7-ac7a-7ccdec8801a7 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Acquiring lock "e833646f-b29a-4fe4-b786-4ee23c6f8a82-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:37:46 compute-0 nova_compute[185389]: 2026-01-26 17:37:46.773 185393 DEBUG oslo_concurrency.lockutils [req-07e1d773-2b5a-49da-b51d-42760833c95d req-c62a9d77-25d1-4da7-ac7a-7ccdec8801a7 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "e833646f-b29a-4fe4-b786-4ee23c6f8a82-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:37:46 compute-0 nova_compute[185389]: 2026-01-26 17:37:46.773 185393 DEBUG oslo_concurrency.lockutils [req-07e1d773-2b5a-49da-b51d-42760833c95d req-c62a9d77-25d1-4da7-ac7a-7ccdec8801a7 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] Lock "e833646f-b29a-4fe4-b786-4ee23c6f8a82-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:37:46 compute-0 nova_compute[185389]: 2026-01-26 17:37:46.773 185393 DEBUG nova.compute.manager [req-07e1d773-2b5a-49da-b51d-42760833c95d req-c62a9d77-25d1-4da7-ac7a-7ccdec8801a7 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] No waiting events found dispatching network-vif-plugged-d4acf2b5-6510-4f4d-b2a0-e986d8b8d2f7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 17:37:46 compute-0 nova_compute[185389]: 2026-01-26 17:37:46.774 185393 WARNING nova.compute.manager [req-07e1d773-2b5a-49da-b51d-42760833c95d req-c62a9d77-25d1-4da7-ac7a-7ccdec8801a7 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] Received unexpected event network-vif-plugged-d4acf2b5-6510-4f4d-b2a0-e986d8b8d2f7 for instance with vm_state deleted and task_state None.
Jan 26 17:37:46 compute-0 nova_compute[185389]: 2026-01-26 17:37:46.774 185393 DEBUG nova.compute.manager [req-07e1d773-2b5a-49da-b51d-42760833c95d req-c62a9d77-25d1-4da7-ac7a-7ccdec8801a7 37758c78dca8435eb8df6269e186097f 3ad65f26e00f403ab7e28233d458a9c7 - - default default] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] Received event network-vif-deleted-d4acf2b5-6510-4f4d-b2a0-e986d8b8d2f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 17:37:47 compute-0 nova_compute[185389]: 2026-01-26 17:37:47.001 185393 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769449051.9998353, f9b0315f-2a3c-471e-b629-b19d90a40a97 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 17:37:47 compute-0 nova_compute[185389]: 2026-01-26 17:37:47.002 185393 INFO nova.compute.manager [-] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] VM Stopped (Lifecycle Event)
Jan 26 17:37:47 compute-0 nova_compute[185389]: 2026-01-26 17:37:47.025 185393 DEBUG nova.compute.manager [None req-c024b69f-cabb-49e4-a2bc-db652aede4c7 - - - - - -] [instance: f9b0315f-2a3c-471e-b629-b19d90a40a97] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 17:37:47 compute-0 nova_compute[185389]: 2026-01-26 17:37:47.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:37:48 compute-0 nova_compute[185389]: 2026-01-26 17:37:48.113 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:37:49 compute-0 nova_compute[185389]: 2026-01-26 17:37:49.609 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:37:53 compute-0 nova_compute[185389]: 2026-01-26 17:37:53.115 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:37:54 compute-0 nova_compute[185389]: 2026-01-26 17:37:54.613 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:37:58 compute-0 nova_compute[185389]: 2026-01-26 17:37:58.117 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:37:59 compute-0 nova_compute[185389]: 2026-01-26 17:37:59.523 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:37:59 compute-0 nova_compute[185389]: 2026-01-26 17:37:59.573 185393 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769449064.5714862, e833646f-b29a-4fe4-b786-4ee23c6f8a82 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 17:37:59 compute-0 nova_compute[185389]: 2026-01-26 17:37:59.574 185393 INFO nova.compute.manager [-] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] VM Stopped (Lifecycle Event)
Jan 26 17:37:59 compute-0 nova_compute[185389]: 2026-01-26 17:37:59.598 185393 DEBUG nova.compute.manager [None req-ed475e0b-28ef-4666-91c8-52c0373febd7 - - - - - -] [instance: e833646f-b29a-4fe4-b786-4ee23c6f8a82] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 17:37:59 compute-0 nova_compute[185389]: 2026-01-26 17:37:59.615 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:37:59 compute-0 podman[201244]: time="2026-01-26T17:37:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 17:37:59 compute-0 podman[201244]: @ - - [26/Jan/2026:17:37:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 26 17:37:59 compute-0 podman[201244]: @ - - [26/Jan/2026:17:37:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3926 "" "Go-http-client/1.1"
Jan 26 17:38:01 compute-0 openstack_network_exporter[204387]: ERROR   17:38:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 17:38:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:38:01 compute-0 openstack_network_exporter[204387]: ERROR   17:38:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 17:38:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:38:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:38:01.802 106955 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:38:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:38:01.803 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:38:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:38:01.803 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:38:03 compute-0 nova_compute[185389]: 2026-01-26 17:38:03.120 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:38:04 compute-0 nova_compute[185389]: 2026-01-26 17:38:04.619 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:38:05 compute-0 podman[263704]: 2026-01-26 17:38:05.215125258 +0000 UTC m=+0.090728050 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 26 17:38:08 compute-0 nova_compute[185389]: 2026-01-26 17:38:08.121 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:38:08 compute-0 podman[263729]: 2026-01-26 17:38:08.246055284 +0000 UTC m=+0.109245289 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:38:08 compute-0 podman[263727]: 2026-01-26 17:38:08.246609229 +0000 UTC m=+0.112004614 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, version=9.6, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', 
'/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, name=ubi9-minimal, io.openshift.tags=minimal rhel9, distribution-scope=public, io.openshift.expose-services=, managed_by=edpm_ansible)
Jan 26 17:38:08 compute-0 podman[263728]: 2026-01-26 17:38:08.252886471 +0000 UTC m=+0.110132773 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, tcib_managed=true, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260120, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Jan 26 17:38:08 compute-0 podman[263730]: 2026-01-26 17:38:08.262983489 +0000 UTC m=+0.108054467 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 26 17:38:09 compute-0 nova_compute[185389]: 2026-01-26 17:38:09.622 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:38:13 compute-0 nova_compute[185389]: 2026-01-26 17:38:13.123 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:38:14 compute-0 podman[263808]: 2026-01-26 17:38:14.203847192 +0000 UTC m=+0.075956195 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=kepler, release=1214.1726694543, architecture=x86_64, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, version=9.4, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, managed_by=edpm_ansible, name=ubi9, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': 
['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 26 17:38:14 compute-0 podman[263807]: 2026-01-26 17:38:14.204282604 +0000 UTC m=+0.086081894 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 26 17:38:14 compute-0 podman[263806]: 2026-01-26 17:38:14.222786441 +0000 UTC m=+0.108041026 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible)
Jan 26 17:38:14 compute-0 nova_compute[185389]: 2026-01-26 17:38:14.625 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:38:18 compute-0 nova_compute[185389]: 2026-01-26 17:38:18.125 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:38:19 compute-0 nova_compute[185389]: 2026-01-26 17:38:19.628 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:38:23 compute-0 nova_compute[185389]: 2026-01-26 17:38:23.128 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:38:24 compute-0 nova_compute[185389]: 2026-01-26 17:38:24.631 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:38:26 compute-0 nova_compute[185389]: 2026-01-26 17:38:26.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:38:27 compute-0 nova_compute[185389]: 2026-01-26 17:38:27.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:38:27 compute-0 nova_compute[185389]: 2026-01-26 17:38:27.720 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 17:38:27 compute-0 nova_compute[185389]: 2026-01-26 17:38:27.721 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 17:38:27 compute-0 nova_compute[185389]: 2026-01-26 17:38:27.743 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 17:38:27 compute-0 nova_compute[185389]: 2026-01-26 17:38:27.745 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:38:28 compute-0 nova_compute[185389]: 2026-01-26 17:38:28.130 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:38:28 compute-0 nova_compute[185389]: 2026-01-26 17:38:28.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:38:28 compute-0 nova_compute[185389]: 2026-01-26 17:38:28.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:38:28 compute-0 nova_compute[185389]: 2026-01-26 17:38:28.721 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 17:38:29 compute-0 nova_compute[185389]: 2026-01-26 17:38:29.635 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:38:29 compute-0 podman[201244]: time="2026-01-26T17:38:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 17:38:29 compute-0 podman[201244]: @ - - [26/Jan/2026:17:38:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 26 17:38:29 compute-0 podman[201244]: @ - - [26/Jan/2026:17:38:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3927 "" "Go-http-client/1.1"
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.364 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.364 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.364 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cf9a2a20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.365 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f04ce8af020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.365 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af0b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cf9a2a20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.366 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cf9a2a20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.366 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ad910>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cf9a2a20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.366 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cf9a2a20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.366 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04d1783140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cf9a2a20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.366 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cf9a2a20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.366 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cf9a2a20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.366 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cf9a2a20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.366 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ada00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cf9a2a20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.366 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04d20d6210>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cf9a2a20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.366 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cf9a2a20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.367 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ada90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cf9a2a20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.367 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8adb20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cf9a2a20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.367 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8adbb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cf9a2a20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.367 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cf9a2a20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.367 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cf9a2a20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.367 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8af440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cf9a2a20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.367 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cf9a2a20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.367 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8afe60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cf9a2a20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.367 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeea0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cf9a2a20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.368 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeed0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cf9a2a20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.368 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aef30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cf9a2a20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.368 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8ac740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cf9a2a20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.368 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aef90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cf9a2a20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.368 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f04ce8aeff0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f04cf9a2a20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.368 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.369 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f04ce8af080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.369 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.369 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f04ce8af0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.369 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.369 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f04ce8af470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.369 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.369 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f04ce8ac260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.369 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.370 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f04cfa81c70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.370 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.370 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f04ce8ac170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.370 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.370 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f04ce8af140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.370 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.370 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f04ce8adb50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.370 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.371 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f04ce8ad9a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.371 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.371 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f04ce8af1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.371 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.371 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f04ce8ad8e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.371 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.371 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f04ce8ada60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.371 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.371 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f04ce8adaf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.371 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.371 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f04ce8ad880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.372 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.372 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f04ce8af3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.372 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.372 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f04ce8af6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.372 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.372 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f04ce8af410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.372 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.372 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f04ce8ac470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.372 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.373 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f04d17142f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.373 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.373 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f04ce8afe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.373 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.373 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f04ce8aede0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.373 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.373 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f04ce8aef00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.373 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.373 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f04ce8ac710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.373 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.374 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f04ce8aef60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.374 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.374 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f04ce8aefc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f04cfa23d70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.374 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.374 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.375 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.375 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.375 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.375 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.375 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.375 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.375 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.375 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.375 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.375 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.375 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.375 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.375 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.375 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.376 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.376 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.376 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.376 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.376 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.376 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.376 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.376 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.376 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.376 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:38:31 compute-0 ceilometer_agent_compute[195095]: 2026-01-26 17:38:31.376 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 26 17:38:31 compute-0 openstack_network_exporter[204387]: ERROR   17:38:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 17:38:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:38:31 compute-0 openstack_network_exporter[204387]: ERROR   17:38:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 17:38:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:38:31 compute-0 nova_compute[185389]: 2026-01-26 17:38:31.721 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:38:33 compute-0 nova_compute[185389]: 2026-01-26 17:38:33.134 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:38:34 compute-0 nova_compute[185389]: 2026-01-26 17:38:34.640 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:38:36 compute-0 podman[263873]: 2026-01-26 17:38:36.228316899 +0000 UTC m=+0.095545073 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 26 17:38:36 compute-0 ovn_controller[97699]: 2026-01-26T17:38:36Z|00190|memory_trim|INFO|Detected inactivity (last active 30006 ms ago): trimming memory
Jan 26 17:38:37 compute-0 nova_compute[185389]: 2026-01-26 17:38:37.608 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:38:38 compute-0 nova_compute[185389]: 2026-01-26 17:38:38.137 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:38:38 compute-0 nova_compute[185389]: 2026-01-26 17:38:38.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:38:39 compute-0 nova_compute[185389]: 2026-01-26 17:38:39.075 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:38:39 compute-0 nova_compute[185389]: 2026-01-26 17:38:39.076 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:38:39 compute-0 nova_compute[185389]: 2026-01-26 17:38:39.077 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:38:39 compute-0 nova_compute[185389]: 2026-01-26 17:38:39.077 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 17:38:39 compute-0 podman[263898]: 2026-01-26 17:38:39.231462274 +0000 UTC m=+0.099902563 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Jan 26 17:38:39 compute-0 podman[263897]: 2026-01-26 17:38:39.231757212 +0000 UTC m=+0.106832003 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260120)
Jan 26 17:38:39 compute-0 podman[263899]: 2026-01-26 17:38:39.247264597 +0000 UTC m=+0.116352904 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 17:38:39 compute-0 podman[263896]: 2026-01-26 17:38:39.263731459 +0000 UTC m=+0.136805686 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, io.openshift.expose-services=, managed_by=edpm_ansible, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, distribution-scope=public, architecture=x86_64, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter)
Jan 26 17:38:39 compute-0 nova_compute[185389]: 2026-01-26 17:38:39.473 185393 WARNING nova.virt.libvirt.driver [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:38:39 compute-0 nova_compute[185389]: 2026-01-26 17:38:39.474 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5332MB free_disk=72.34253692626953GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 17:38:39 compute-0 nova_compute[185389]: 2026-01-26 17:38:39.474 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:38:39 compute-0 nova_compute[185389]: 2026-01-26 17:38:39.475 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:38:39 compute-0 nova_compute[185389]: 2026-01-26 17:38:39.644 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:38:39 compute-0 nova_compute[185389]: 2026-01-26 17:38:39.868 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 17:38:39 compute-0 nova_compute[185389]: 2026-01-26 17:38:39.869 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 17:38:39 compute-0 nova_compute[185389]: 2026-01-26 17:38:39.884 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Refreshing inventories for resource provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 26 17:38:39 compute-0 nova_compute[185389]: 2026-01-26 17:38:39.908 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Updating ProviderTree inventory for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 26 17:38:39 compute-0 nova_compute[185389]: 2026-01-26 17:38:39.909 185393 DEBUG nova.compute.provider_tree [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Updating inventory in ProviderTree for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 26 17:38:39 compute-0 nova_compute[185389]: 2026-01-26 17:38:39.927 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Refreshing aggregate associations for resource provider b0bb5d31-f35b-4a04-b67d-66acc24fb822, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 26 17:38:39 compute-0 nova_compute[185389]: 2026-01-26 17:38:39.955 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Refreshing trait associations for resource provider b0bb5d31-f35b-4a04-b67d-66acc24fb822, traits: COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SHA,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_ABM,HW_CPU_X86_AMD_SVM,HW_CPU_X86_F16C,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE42,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_BMI,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_MMX,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 26 17:38:39 compute-0 nova_compute[185389]: 2026-01-26 17:38:39.979 185393 DEBUG nova.compute.provider_tree [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:38:40 compute-0 nova_compute[185389]: 2026-01-26 17:38:40.004 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:38:40 compute-0 nova_compute[185389]: 2026-01-26 17:38:40.035 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 17:38:40 compute-0 nova_compute[185389]: 2026-01-26 17:38:40.036 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.561s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:38:40 compute-0 nova_compute[185389]: 2026-01-26 17:38:40.037 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:38:40 compute-0 nova_compute[185389]: 2026-01-26 17:38:40.038 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 26 17:38:43 compute-0 nova_compute[185389]: 2026-01-26 17:38:43.138 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:38:43 compute-0 nova_compute[185389]: 2026-01-26 17:38:43.737 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:38:43 compute-0 nova_compute[185389]: 2026-01-26 17:38:43.738 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 26 17:38:43 compute-0 nova_compute[185389]: 2026-01-26 17:38:43.774 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 26 17:38:44 compute-0 nova_compute[185389]: 2026-01-26 17:38:44.648 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:38:44 compute-0 podman[263975]: 2026-01-26 17:38:44.787468695 +0000 UTC m=+0.088999824 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team)
Jan 26 17:38:44 compute-0 podman[263976]: 2026-01-26 17:38:44.80914626 +0000 UTC m=+0.099706428 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, architecture=x86_64, managed_by=edpm_ansible, vendor=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, config_id=kepler, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1214.1726694543, release-0.7.12=, build-date=2024-09-18T21:23:30, vcs-type=git, version=9.4, io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0)
Jan 26 17:38:44 compute-0 podman[263974]: 2026-01-26 17:38:44.820330257 +0000 UTC m=+0.122862803 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 26 17:38:45 compute-0 nova_compute[185389]: 2026-01-26 17:38:45.752 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:38:48 compute-0 nova_compute[185389]: 2026-01-26 17:38:48.144 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:38:48 compute-0 nova_compute[185389]: 2026-01-26 17:38:48.714 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:38:49 compute-0 nova_compute[185389]: 2026-01-26 17:38:49.651 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:38:49 compute-0 nova_compute[185389]: 2026-01-26 17:38:49.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:38:53 compute-0 nova_compute[185389]: 2026-01-26 17:38:53.145 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:38:54 compute-0 nova_compute[185389]: 2026-01-26 17:38:54.654 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:38:58 compute-0 nova_compute[185389]: 2026-01-26 17:38:58.145 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:38:59 compute-0 nova_compute[185389]: 2026-01-26 17:38:59.658 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:38:59 compute-0 podman[201244]: time="2026-01-26T17:38:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 17:38:59 compute-0 podman[201244]: @ - - [26/Jan/2026:17:38:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 26 17:38:59 compute-0 podman[201244]: @ - - [26/Jan/2026:17:38:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3935 "" "Go-http-client/1.1"
Jan 26 17:39:01 compute-0 openstack_network_exporter[204387]: ERROR   17:39:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 17:39:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:39:01 compute-0 openstack_network_exporter[204387]: ERROR   17:39:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 17:39:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:39:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:39:01.804 106955 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:39:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:39:01.804 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:39:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:39:01.805 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:39:03 compute-0 nova_compute[185389]: 2026-01-26 17:39:03.148 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:39:04 compute-0 nova_compute[185389]: 2026-01-26 17:39:04.661 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:39:07 compute-0 podman[264034]: 2026-01-26 17:39:07.207415213 +0000 UTC m=+0.094223687 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 26 17:39:08 compute-0 nova_compute[185389]: 2026-01-26 17:39:08.150 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:39:08 compute-0 nova_compute[185389]: 2026-01-26 17:39:08.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:39:09 compute-0 nova_compute[185389]: 2026-01-26 17:39:09.666 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:39:10 compute-0 podman[264059]: 2026-01-26 17:39:10.255208283 +0000 UTC m=+0.137541056 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260120, io.buildah.version=1.41.4)
Jan 26 17:39:10 compute-0 podman[264058]: 2026-01-26 17:39:10.256035055 +0000 UTC m=+0.135662573 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, vcs-type=git, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, architecture=x86_64, version=9.6, com.redhat.component=ubi9-minimal-container, release=1755695350, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', 
'/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, distribution-scope=public, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 26 17:39:10 compute-0 podman[264061]: 2026-01-26 17:39:10.261865945 +0000 UTC m=+0.136164717 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 26 17:39:10 compute-0 podman[264060]: 2026-01-26 17:39:10.274689947 +0000 UTC m=+0.146601333 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 26 17:39:13 compute-0 nova_compute[185389]: 2026-01-26 17:39:13.153 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:39:14 compute-0 nova_compute[185389]: 2026-01-26 17:39:14.668 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:39:15 compute-0 podman[264133]: 2026-01-26 17:39:15.244029258 +0000 UTC m=+0.109969809 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Jan 26 17:39:15 compute-0 podman[264134]: 2026-01-26 17:39:15.261696193 +0000 UTC m=+0.112568350 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, distribution-scope=public, architecture=x86_64, config_id=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, name=ubi9, version=9.4, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., release-0.7.12=, io.openshift.tags=base rhel9, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler)
Jan 26 17:39:15 compute-0 podman[264132]: 2026-01-26 17:39:15.281132616 +0000 UTC m=+0.154344256 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 26 17:39:18 compute-0 nova_compute[185389]: 2026-01-26 17:39:18.155 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:39:19 compute-0 nova_compute[185389]: 2026-01-26 17:39:19.672 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:39:23 compute-0 nova_compute[185389]: 2026-01-26 17:39:23.157 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:39:24 compute-0 nova_compute[185389]: 2026-01-26 17:39:24.675 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:39:27 compute-0 nova_compute[185389]: 2026-01-26 17:39:27.741 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:39:28 compute-0 nova_compute[185389]: 2026-01-26 17:39:28.160 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:39:28 compute-0 nova_compute[185389]: 2026-01-26 17:39:28.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:39:28 compute-0 nova_compute[185389]: 2026-01-26 17:39:28.721 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 17:39:28 compute-0 nova_compute[185389]: 2026-01-26 17:39:28.722 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 17:39:28 compute-0 nova_compute[185389]: 2026-01-26 17:39:28.757 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 17:39:29 compute-0 nova_compute[185389]: 2026-01-26 17:39:29.680 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:39:29 compute-0 nova_compute[185389]: 2026-01-26 17:39:29.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:39:29 compute-0 nova_compute[185389]: 2026-01-26 17:39:29.720 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:39:29 compute-0 nova_compute[185389]: 2026-01-26 17:39:29.721 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:39:29 compute-0 nova_compute[185389]: 2026-01-26 17:39:29.721 185393 DEBUG nova.compute.manager [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 17:39:29 compute-0 podman[201244]: time="2026-01-26T17:39:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 17:39:29 compute-0 podman[201244]: @ - - [26/Jan/2026:17:39:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 26 17:39:29 compute-0 podman[201244]: @ - - [26/Jan/2026:17:39:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3932 "" "Go-http-client/1.1"
Jan 26 17:39:31 compute-0 openstack_network_exporter[204387]: ERROR   17:39:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 17:39:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:39:31 compute-0 openstack_network_exporter[204387]: ERROR   17:39:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 17:39:31 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:39:31 compute-0 nova_compute[185389]: 2026-01-26 17:39:31.721 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:39:33 compute-0 nova_compute[185389]: 2026-01-26 17:39:33.163 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:39:34 compute-0 nova_compute[185389]: 2026-01-26 17:39:34.685 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:39:38 compute-0 nova_compute[185389]: 2026-01-26 17:39:38.167 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:39:38 compute-0 podman[264195]: 2026-01-26 17:39:38.200131049 +0000 UTC m=+0.071585025 container health_status 25f6662ba8abf3582ed86b3c1b745f3e264826e4a478aad811b7aa0f0598ea64 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 26 17:39:39 compute-0 nova_compute[185389]: 2026-01-26 17:39:39.690 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:39:40 compute-0 nova_compute[185389]: 2026-01-26 17:39:40.719 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:39:40 compute-0 nova_compute[185389]: 2026-01-26 17:39:40.831 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:39:40 compute-0 nova_compute[185389]: 2026-01-26 17:39:40.832 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:39:40 compute-0 nova_compute[185389]: 2026-01-26 17:39:40.832 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:39:40 compute-0 nova_compute[185389]: 2026-01-26 17:39:40.832 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 17:39:41 compute-0 podman[264219]: 2026-01-26 17:39:41.216374483 +0000 UTC m=+0.092507319 container health_status 881610e1b1cf6f79d32229df6584abe0a5c1eaa225c3798efd50ba46a46a6ce6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 26 17:39:41 compute-0 podman[264218]: 2026-01-26 17:39:41.217040261 +0000 UTC m=+0.095371688 container health_status 5d67dbe606d3646749758bc44b9a3d2dd2ca1672995157a4aabd7f4c67a084b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260120, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 26 17:39:41 compute-0 podman[264220]: 2026-01-26 17:39:41.221677989 +0000 UTC m=+0.094353051 container health_status 89fb3a33189f20a1a2358a7171432db7283cb2af55b9a041da611e3b4ea46633 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 26 17:39:41 compute-0 nova_compute[185389]: 2026-01-26 17:39:41.224 185393 WARNING nova.virt.libvirt.driver [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:39:41 compute-0 nova_compute[185389]: 2026-01-26 17:39:41.225 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5338MB free_disk=72.34256744384766GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 17:39:41 compute-0 nova_compute[185389]: 2026-01-26 17:39:41.225 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:39:41 compute-0 nova_compute[185389]: 2026-01-26 17:39:41.225 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:39:41 compute-0 podman[264217]: 2026-01-26 17:39:41.229513664 +0000 UTC m=+0.107226553 container health_status 2fe3442c6652559eed786342e43f43d9cea8e1150737dd54eccfd3a0619b8069 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.tags=minimal rhel9, name=ubi9-minimal, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': 
['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.expose-services=)
Jan 26 17:39:41 compute-0 nova_compute[185389]: 2026-01-26 17:39:41.357 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 17:39:41 compute-0 nova_compute[185389]: 2026-01-26 17:39:41.358 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 17:39:41 compute-0 nova_compute[185389]: 2026-01-26 17:39:41.393 185393 DEBUG nova.compute.provider_tree [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed in ProviderTree for provider: b0bb5d31-f35b-4a04-b67d-66acc24fb822 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:39:41 compute-0 nova_compute[185389]: 2026-01-26 17:39:41.441 185393 DEBUG nova.scheduler.client.report [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Inventory has not changed for provider b0bb5d31-f35b-4a04-b67d-66acc24fb822 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:39:41 compute-0 nova_compute[185389]: 2026-01-26 17:39:41.444 185393 DEBUG nova.compute.resource_tracker [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 17:39:41 compute-0 nova_compute[185389]: 2026-01-26 17:39:41.445 185393 DEBUG oslo_concurrency.lockutils [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.219s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:39:41 compute-0 sshd-session[264294]: Accepted publickey for zuul from 192.168.122.10 port 59698 ssh2: ECDSA SHA256:e2nTWydmUlQnW5BYhfFSR+TRarHnUqn0luI8Mjiyqhk
Jan 26 17:39:42 compute-0 systemd-logind[788]: New session 32 of user zuul.
Jan 26 17:39:42 compute-0 systemd[1]: Started Session 32 of User zuul.
Jan 26 17:39:42 compute-0 sshd-session[264294]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 26 17:39:42 compute-0 sudo[264298]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Jan 26 17:39:42 compute-0 sudo[264298]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 17:39:43 compute-0 nova_compute[185389]: 2026-01-26 17:39:43.168 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:39:44 compute-0 nova_compute[185389]: 2026-01-26 17:39:44.694 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:39:46 compute-0 podman[264436]: 2026-01-26 17:39:46.202600769 +0000 UTC m=+0.083408950 container health_status 9f2a35b9a78447832bbc773f390478d918e82d4256772aeae01e380acf2c9990 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, 
managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 17:39:46 compute-0 podman[264437]: 2026-01-26 17:39:46.223082771 +0000 UTC m=+0.098231487 container health_status d6b6f850dc565a03419afeaaceded6b03cb17af4417a2aa43b56e992a36dd285 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, name=ubi9, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, config_id=kepler, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, vcs-type=git, container_name=kepler, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container)
Jan 26 17:39:46 compute-0 podman[264435]: 2026-01-26 17:39:46.280655751 +0000 UTC m=+0.161394300 container health_status 6642c03f49bd8782ec7c846a713718298e54cae1da1aa85ae8b4af9498ffff4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6a9c71e0c3c29cc7bdc7dc9d7ca22e4a154d39fb6f6853e0c797d822ccc5d30c-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435-b15c16f6907abf8db56e76d0993fb4238b1c423129e7369914133c9a83a95435'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 26 17:39:46 compute-0 nova_compute[185389]: 2026-01-26 17:39:46.441 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:39:47 compute-0 ovs-vsctl[264527]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Jan 26 17:39:48 compute-0 nova_compute[185389]: 2026-01-26 17:39:48.171 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:39:48 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 264322 (sos)
Jan 26 17:39:48 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Jan 26 17:39:48 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Jan 26 17:39:48 compute-0 virtqemud[185114]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Jan 26 17:39:49 compute-0 virtqemud[185114]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Jan 26 17:39:49 compute-0 virtqemud[185114]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Jan 26 17:39:49 compute-0 nova_compute[185389]: 2026-01-26 17:39:49.697 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:39:49 compute-0 nova_compute[185389]: 2026-01-26 17:39:49.718 185393 DEBUG oslo_service.periodic_task [None req-4ea071b8-7fd3-4774-b5cf-e1e7c9591a48 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:39:50 compute-0 crontab[264950]: (root) LIST (root)
Jan 26 17:39:52 compute-0 systemd[1]: Starting Hostname Service...
Jan 26 17:39:53 compute-0 systemd[1]: Started Hostname Service.
Jan 26 17:39:53 compute-0 nova_compute[185389]: 2026-01-26 17:39:53.173 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:39:54 compute-0 nova_compute[185389]: 2026-01-26 17:39:54.702 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:39:58 compute-0 nova_compute[185389]: 2026-01-26 17:39:58.177 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:39:59 compute-0 nova_compute[185389]: 2026-01-26 17:39:59.705 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:39:59 compute-0 podman[201244]: time="2026-01-26T17:39:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 26 17:39:59 compute-0 podman[201244]: @ - - [26/Jan/2026:17:39:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 26 17:39:59 compute-0 podman[201244]: @ - - [26/Jan/2026:17:39:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3935 "" "Go-http-client/1.1"
Jan 26 17:40:01 compute-0 openstack_network_exporter[204387]: ERROR   17:40:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 26 17:40:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:40:01 compute-0 openstack_network_exporter[204387]: ERROR   17:40:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 26 17:40:01 compute-0 openstack_network_exporter[204387]: 
Jan 26 17:40:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:40:01.805 106955 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:40:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:40:01.806 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:40:01 compute-0 ovn_metadata_agent[106950]: 2026-01-26 17:40:01.806 106955 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:40:01 compute-0 ovs-appctl[266219]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Jan 26 17:40:01 compute-0 ovs-appctl[266224]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Jan 26 17:40:01 compute-0 ovs-appctl[266229]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Jan 26 17:40:03 compute-0 nova_compute[185389]: 2026-01-26 17:40:03.177 185393 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
